Using DB2 for IBM i (aka IBM AS/400 or iSeries), I was trying to drop a stored procedure using the command
DROP PROCEDURE TEST/TEST_FTP;
but was getting the message:
Message: [SQL0476] Routine TEST_FTP in TEST not unique.
Cause: Function or procedure TEST_FTP in TEST was specified, not by signature or specific name, and more than one specific instance of the routine was found. Or the procedure was used in a DESCRIBE PROCEDURE or ASSOCIATE LOCATOR statement without a schema name specified while connected to a remote system, and more than one procedure has been called with that name.
Recovery: Request the routine either by its specific name, or by its signature (function or procedure name with parameter types). If this is for a DESCRIBE PROCEDURE or ASSOCIATE LOCATOR statement, specify the schema name for the procedure. Try the request again.
Using Ops Navigator to view the stored procedures, I was getting an index array error message. When I used a query against SYSROUTINE in QSYS2, I could see two entries with slightly different values in SPECIFIC_NAME, but with the same ROUTINE_NAME of TEST_FTP.
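For reference, the duplicates can be listed with a query along these lines against the QSYS2.SYSROUTINE catalog view (a sketch; adjust the schema and routine name to your own):

```sql
-- Two rows with the same ROUTINE_NAME but different
-- SPECIFIC_NAME values confirm an overloaded procedure.
SELECT SPECIFIC_NAME, ROUTINE_NAME
  FROM QSYS2.SYSROUTINE
 WHERE ROUTINE_SCHEMA = 'TEST'
   AND ROUTINE_NAME   = 'TEST_FTP';
```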
After some investigation I found that you can specify the parameter signature on the DROP, so I tried
DROP PROCEDURE TEST/TEST_FTP(CHAR(13));
This made the reference unique and allowed me to delete one of the two. I was then able to call
DROP PROCEDURE TEST/TEST_FTP;
which deleted the remaining procedure of the same name.
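The recovery text also mentions dropping by specific name, which avoids spelling out the signature. A hedged sketch, where TEST_FTP1 stands in for whatever value appears in the SPECIFIC_NAME column of QSYS2.SYSROUTINE:

```sql
-- Drop exactly one overload by its specific name
-- (TEST_FTP1 is a hypothetical SPECIFIC_NAME value).
DROP SPECIFIC PROCEDURE TEST/TEST_FTP1;
```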
Basically, in DB2 on IBM i (and, I'd guess, most other SQL databases) stored procedures can share a name as long as their parameter lists differ. This is essentially the same as method overloading in Java and some other languages, where a method's (subroutine's) signature is composed of its name plus the parameter list passed into it.
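To illustrate the parallel, here is a minimal Java sketch (names are made up for the example): two methods share the name `send`, and the compiler picks one by signature, just as DB2 distinguishes the two TEST_FTP procedures by their parameter types.

```java
public class Overload {
    // Same name, different parameter types: two distinct signatures,
    // analogous to TEST_FTP(CHAR(13)) vs. the other TEST_FTP.
    static String send(String host) {
        return "send(String) to " + host;
    }

    static String send(int port) {
        return "send(int) on port " + port;
    }

    public static void main(String[] args) {
        System.out.println(send("myas400")); // resolves to send(String)
        System.out.println(send(21));        // resolves to send(int)
    }
}
```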
Why would a perfectly good SQL based application which has been running well for years suddenly start to run slowly?
It’s easy for us iSeries users to forget about database administration because OS/400 (or i5/OS) does such a good job of managing it for us.
From time to time an AS/400 (for that is still what I like to call it) logical file (index) can become invalidated. This typically happens if a job is using it when the subsystem goes down for some reason.
Doing a little research, what I’ve found is that if an RPG program with the table specified on an F-spec tries to open a damaged logical, it triggers the system to rebuild the logical during the file open. This isn’t the case when you run an SQL statement: the SQL processor only considers currently valid access paths when it builds its access plan, so it skips damaged logicals (indexes).
In my case, this was causing the SQL processor to drop back to a full table scan rather than an indexed lookup. This changed the usual runtime of the SQL statement from less than a second to several minutes.
Using OpsNav and the Visual Explain tool you can view the SQL statement and see exactly what it’s doing, but it may still not be obvious why it didn’t use the logical file that you know exists.
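If you want to check a specific logical yourself before involving the operator, DSPFD can show its access path details, including whether the path is currently valid. A sketch, where TEST/TESTLF is a hypothetical logical file name:

```
DSPFD FILE(TEST/TESTLF) TYPE(*ACCPTH)
```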
The tip is to have your System Operator run the EDTRBDAP (Edit Rebuild of Access Paths) command, which lists any damaged access paths. The system gives an estimated time for each rebuild, which so far we’ve found to be quite accurate. It’s probably best to plan the rebuilds outside normal operational hours. This screen also gets displayed during an attended IPL, but most SysOps I’ve spoken to say they were taught to just press Enter, because the system will sort the problem out later, which is obviously no longer true if you’re using SQL.