Database performance tuning is a necessary step when working with any RDBMS, but it must be done with care: managing bulk data is not easy, and a poorly chosen change can do more harm than good. Below are tips for database performance tuning that should help anyone who deals with these troubles on a regular basis.
1. Avoid using unnecessary foreign key constraints: Foreign key constraints should not be overused. A good example is the system tables, whose design deliberately does not include foreign keys. Despite their many benefits, foreign keys also have drawbacks. They become genuinely necessary at times to avoid data redundancy, but things can get too complicated when every table ends up linked to every other.
Foreign keys also get in the way of automated testing. To test one table, all the related tables must already contain data, even data unrelated to the table actually under test; otherwise the database rejects the operation.
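The testing friction above can be seen in a minimal sketch using SQLite through Python's standard `sqlite3` module; the `customers` and `orders` tables here are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when this is set
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE orders ("
    " id INTEGER PRIMARY KEY,"
    " customer_id INTEGER REFERENCES customers(id))"
)

# Inserting an order without a matching customer row is rejected --
# exactly the behavior that complicates isolated test fixtures.
rejected = False
try:
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (1, 99)")
except sqlite3.IntegrityError:
    rejected = True

# The unrelated parent row must be created first, then the insert succeeds.
conn.execute("INSERT INTO customers (id, name) VALUES (99, 'Alice')")
conn.execute("INSERT INTO orders (id, customer_id) VALUES (1, 99)")
```

A test that only cares about `orders` still has to populate `customers` first, which is the trade-off the tip describes.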
2. Use two heads instead of one: As the size of a database increases, so does its I/O load. Many database systems allow their users to split records across multiple physical hard drives. With this kind of setup, input-output operations become substantially faster when fetching records.
3. Use SELECT commands for limited data: The less data a query retrieves, the faster it runs. Prefer filtering at the server end rather than the client end; the less data sent over the network, the sooner the results appear. For instance:
SELECT FirstName, LastName, City
FROM Dummy
WHERE City = 'NYC'
This command fetches the first names, last names, and cities from the table Dummy for rows where City equals 'NYC'. You may have noticed that City will be 'NYC' in every fetched row, because that is exactly what the WHERE clause demands. Retrieving that column is unnecessary, since its value is already known; drop it from the SELECT list and return only FirstName and LastName.
4. Consider dropping the index before loading the data: To make INSERT commands run faster, consider dropping the index before loading a large batch of data. The index can be recreated once the inserts have finished.
For instance, if thousands of rows must be inserted into an online system, use a temporary table to load the data, provided that the temporary table has no index. Transferring records from one table to another is quicker than loading them from an external source. You can then drop the indexes on the primary table, move the data from the temporary table into the final table, and create the indexes once again.
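The drop-load-rebuild sequence can be sketched with SQLite's Python driver; the `sales` table, index name, and row count are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.execute("CREATE INDEX idx_sales_region ON sales (region)")

rows = [(i, "west" if i % 2 else "east", float(i)) for i in range(10_000)]

# 1. Drop the secondary index so the bulk insert skips per-row index maintenance.
conn.execute("DROP INDEX idx_sales_region")

# 2. Load everything inside a single transaction.
with conn:
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

# 3. Rebuild the index once, after all the data is in place.
conn.execute("CREATE INDEX idx_sales_region ON sales (region)")
```

Rebuilding the index in one pass over the finished table is generally cheaper than updating it ten thousand times, one insert at a time.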
5. Create optimized indexes: An index needs the right balance in terms of how many indexes exist, which fields are included, and the order in which those fields appear. If there are too many indexes, data-modification statements slow down, because every INSERT, UPDATE, and DELETE must maintain each one.
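Column order in a composite index matters, which a small SQLite sketch can show; the `customers` table and index name are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, city TEXT, last_name TEXT)"
)
# Column order matters: this index can serve a filter on city alone,
# or on city plus last_name, but not efficiently a filter on last_name alone.
conn.execute("CREATE INDEX idx_city_name ON customers (city, last_name)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT last_name FROM customers WHERE city = 'NYC'"
).fetchall()
print(plan)  # the plan detail mentions idx_city_name
```

Putting the most selective, most frequently filtered column first lets one index serve several query shapes.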
6. Use the EXPLAIN command: This command is highly useful for fine-tuning and optimizing SQL queries. It explains how a statement will be processed: how the tables are joined, in what order, and so on.
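In SQLite the command is spelled EXPLAIN QUERY PLAN, and comparing its output before and after adding an index makes the optimizer's choice visible; the `orders` table and index name below are assumptions for the sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, city TEXT, total REAL)")

query = "SELECT total FROM orders WHERE city = ?"

# Without an index, the planner falls back to a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query, ("NYC",)).fetchall()
print(before)  # plan detail reads like: SCAN orders

conn.execute("CREATE INDEX idx_orders_city ON orders (city)")

# With the index in place, the plan changes to an index search.
after = conn.execute("EXPLAIN QUERY PLAN " + query, ("NYC",)).fetchall()
print(after)  # plan detail reads like: SEARCH orders USING INDEX idx_orders_city
```

MySQL and PostgreSQL offer the same insight through their own EXPLAIN statements, with different output formats.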
7. Specify a value in advance to determine memory size: Once an index is created, it needs more storage each time new data is inserted, and reorganizing the storage space as rows are added takes a toll on DML queries. A Database Administrator who knows the expected growth can specify it for an index up front. For instance, in SQL Server a fill factor reserves free space on each index page (the table and index names here are illustrative):

CREATE INDEX IX_Customers_City
ON Customers (City)
WITH (FILLFACTOR = 80)
A few points may still be missing. If you have extra tips for database performance tuning, please share them in the comments section.