The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog.
Here’s an excerpt:
A San Francisco cable car holds 60 people. This blog was viewed about 3,600 times in 2014. If it were a cable car, it would take about 60 trips to carry that many people.
To be a proficient DBA, it is very important to automate mundane tasks, and a good knowledge of UNIX can simplify things amazingly. My work profile gives me both database administration and development opportunities. So while upgrading a banking application, I received a list of 200 code mappings to be updated in the application’s default data. The code ‘BxWidTx’ in one of the tables was supposed to be updated to ‘BNKWidTxn’, and so on. It felt terrible to imagine writing 200 similar UPDATE SQLs.
Discussing problems with your colleagues always helps. A senior DBA showed me how it could be done within a few minutes with the help of AWK scripting. I arranged the code mappings in an input file, one mapping per line, with “|” as the field separator between the old and new values, for example BxWidTx|BNKWidTxn.
Then I wrote the AWK script below:
BEGIN { FS = "|" }
{
    printf("UPDATE bankCode\n");
    printf("  SET TxnCode='%s'\n", $2);
    printf("  WHERE TxnCode='%s';\n\n", $1);
}
END { printf("CONNECT RESET;\n"); }
Now save this code as code_script.awk and execute it with the below command:
awk -f code_script.awk inputParam_file > output_script.sql
The AWK script generates a SQL script as its output: for each line in the input file, it prints one UPDATE statement into the output file. You can add more parameters per line in the input file and refer to them as $3, $4, and so on in your AWK script logic.
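The end-to-end flow can be sketched as a small shell session, using the one mapping quoted above (BxWidTx to BNKWidTxn); the file names match those used in the post.

```shell
# Recreate the input file with the single mapping mentioned above
printf 'BxWidTx|BNKWidTxn\n' > inputParam_file

# The AWK script from this post, saved as code_script.awk
cat > code_script.awk <<'EOF'
BEGIN { FS = "|" }
{
    printf("UPDATE bankCode\n");
    printf("  SET TxnCode='%s'\n", $2);
    printf("  WHERE TxnCode='%s';\n\n", $1);
}
END { printf("CONNECT RESET;\n"); }
EOF

# Generate the SQL script
awk -f code_script.awk inputParam_file > output_script.sql
cat output_script.sql
```

With 200 mappings in inputParam_file, the same run produces the full 200-statement script in one shot, with CONNECT RESET printed once at the end.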
So always think of automating a task before doing it manually. Keep learning, keep sharing!!!
Hope this helps!!!
One fine morning, when I was looking for something interesting, I got a call from a developer. When developers call you, they think of the DBA as a magician with an immediate solution to every problem they may have. And the challenge is the strict deadline within which the problem should be resolved.
So here was the challenge, and I was happy to have it. Nothing worth having is ever easy. The problem was that in the beneficiary details of the banking application, the account number column held values masked with a varying number of leading asterisks, and the desired output was to have only one ‘*’ at the start of each value. So the problem can be reframed as, “Replace the leading asterisks (whether even or odd in number) with a single asterisk”.
I struggled through it, but finally came up with a pattern-replacement SQL solution. Here it goes:
This works for the aforementioned problem. I strongly believe that a solution exists for every problem; the only challenge is to find it. For this one, I wonder whether a better or simpler solution exists. Please contribute by commenting on this post if you can think of one. There is great value in discussion, and you learn a lot from it. I hope this post proves helpful to someone at some point in time.
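The post does not reproduce the actual SQL, so as a hedged sketch only: the same masking rule can be expressed outside the database with sed, collapsing any leading run of asterisks into a single one. The sample account values below are invented for illustration.

```shell
# Collapse any leading run of '*' into a single '*' (sample values are invented)
printf '%s\n' '****6789' '***4321' '*0001' | sed 's/^\*\{1,\}/*/'
# prints:
# *6789
# *4321
# *0001
```

On DB2 releases that ship regular-expression functions (11.1 and later), REGEXP_REPLACE with the pattern ^\*+ and replacement * would express the same rule directly in SQL.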
Doing everyday tasks in a better way!!
As a DB2 LUW developer, I have written many migration scripts that perform data manipulation on bulk data. The best-known problem with such bulk operations is that, much of the time, we face the transaction-log-full error:
“SQL0964C The transaction log for the database is full”
This problem occurs frequently when the amount of data is huge. For example, consider a performance or production database table holding millions of customers, where we need to update some value across all of the customer data.
To resolve this problem, DBAs usually configure database parameters such as LOGBUFSZ, LOGFILSIZ, LOGPRIMARY and LOGSECOND. This is a trial-and-error approach, where we set particular values and retry the operation.
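For reference, these parameters can be inspected and changed from the DB2 command line; the database name and the values below are placeholders, not recommendations:

```shell
# Show the current logging configuration (SAMPLE is a placeholder database name)
db2 get db cfg for SAMPLE | grep -i log

# One round of the trial-and-error tuning described above; placeholder values
db2 update db cfg for SAMPLE using LOGFILSIZ 10240
db2 update db cfg for SAMPLE using LOGPRIMARY 20 LOGSECOND 50
```

LOGFILSIZ, LOGPRIMARY and LOGSECOND changes take effect for new connections; LOGPRIMARY in particular requires all applications to disconnect before the new value is used.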
Along with this, over time I learned ways to perform such bulk operations using PL/SQL.
Note: I use the sample…
Having been in the world of DB2 LUW for the last two years, I have come across my share of system performance issues. As part of performance-tuning exercises, I evaluated a lot of queries for their explain plans, to see what DB2 was doing internally to fetch the result sets. Initially, the different JOIN methods didn’t make much sense to me. One day, I decided to learn about them and scanned the web for good articles on DB2 JOIN methods. One of the best that caught my eye was the paper titled “Getting the Most from Hash Joins” by Adriana Zubiri, DB2 UDB Performance Team, IBM Toronto Lab. You can look for it on the web if you intend to study DB2 JOINs in detail. In this article, I quote the gist of this and the other papers, as I understand them.
Recently, I did a DB2 10.5 ESE setup on a Linux box. The installation went smoothly, but I had an interesting issue while bringing DB2 up. Executing the db2start command gave the below error message:
The database manager failed to allocate shared memory.
This actually happened because I had installed DB2 on a virtual machine with very limited memory. All I could find about the above error message on the DB2 forums was, “The INSTANCE_MEMORY DBM CFG parameter is set to a hard value that is too high to be allocated”. Each of those forums recommended setting it to AUTOMATIC. I tried, but unfortunately it didn’t work, and I still faced the same error when I ran db2start.
Yet again I came across this issue. As a matter of protocol in my organization, we use 32-bit DB2 on the development boxes, but unfortunately some developers had 64-bit DB2 installed on their machines and created the TEST database on it. The issue came to light when developer A shared a TEST DB backup with developer B. When developer B tried to restore this backup image, which had been created with the COMPRESS option of the BACKUP command on 64-bit DB2, to his local 32-bit DB2, he faced the below error:
SQL2570N An attempt to restore on target OS "NT-32" from a backup
created on source OS "NT-64" failed due to the incompatibility
of operating systems or an incorrect specification of the
restore command. Reason code: "2".