Tuesday, August 3, 2010

How to keep session alive in asp.net application


Friends, many times we get requirements to increase the session timeout on the application server. What we usually do is go to the web.config file and set the session timeout to some value (by default it is 20 minutes). But remember: sometimes you increase the timeout in web.config and it doesn't work. Why? Because you also have to check the application pool's recycling and idle timeout settings; they should be longer than the session expiry period you set in web.config.
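For reference, here is a minimal sketch of the web.config setting (the 60-minute value is just an example); the application pool idle timeout and recycling interval live in IIS, not in web.config:

```xml
<!-- web.config: session timeout is specified in minutes (default is 20) -->
<configuration>
  <system.web>
    <sessionState mode="InProc" timeout="60" />
  </system.web>
</configuration>
```

Keep in mind that with InProc session state, an application pool recycle wipes every session regardless of this timeout, so in IIS Manager also check the pool's Idle Time-out and its recycling interval.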

But anyway, this still doesn't avoid session expiration in your application. The session will still expire if, say, a user is filling out a long form, or has left a page open and gone out for coffee. By the time the user comes back to resume work, the session will have expired.

Some people reset the session in the code-behind by checking if the session is null, then reading the user ID from cookies and resetting it. Or, if it is an intranet application with AD authentication, they get the currently logged-in system user and set it in session. Now, if you don't want to do any of that in your application, and all you want is for the session not to expire until the user closes the browser, simply follow the steps below.
  1. First of all, let me give you a brief overview of how session state works in ASP.NET. By default the timeout is set to 20 minutes: when a user sends a request to the server (in simple language, browses a page), the server stores the time the request came in and extends the session timeout to 20 minutes from then. If after 10 minutes the user sends another request, the timeout is extended to 20 minutes again. If the user doesn't send any request to the server for 20 minutes, the session expires. This is the basic working of session state.

  2. From the first step you know that the server needs to receive requests from the client to keep the session alive, so we can use some mechanism to keep pinging the server. Below is a script that keeps sending requests to the server and thereby keeps the session alive.

    Create a SessionCheck.aspx page in your application; you can keep this page blank, so that the requests it receives from the browser add no real load.

    Using JavaScript:
    An invisible image is used to keep the session alive: each time its src attribute changes, the browser sends a fresh request to the server.
    <img id="imgSessionCheck" width="1" height="1" />

    <script type="text/javascript">
        // Counter used to prevent caching in some browsers and to track
        // how many times the client has pinged the server
        var counter = 0;

        function KeepSession() {
            // Increase the counter value
            counter++;

            // Get a reference to the image
            var img = document.getElementById("imgSessionCheck");

            // Set a new src value, which sends a request to the server
            img.src = "http://servername:port/appName/SessionCheck.aspx?count=" + counter;

            // Schedule this function to run again after an interval (1 minute here)
            setTimeout(KeepSession, 60000);
        }

        // Call the function for the first time
        KeepSession();
    </script>

    Using jQuery:
    <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script>
    <script type="text/javascript">
        function KeepSession() {
            // A request to the server
            $.post("http://servername:port/appName/SessionCheck.aspx");

            // Schedule this function to run again after an interval (1 minute here).
            // Note: setTimeout is used rather than setInterval; calling setInterval
            // inside this function would register a new interval on every call.
            setTimeout(KeepSession, 60000);
        }

        // Call the function for the first time
        KeepSession();
    </script>

    It's that simple: you just need to put this script in your page.

    Thanks for reading this post.

    Happy Coding!!!

Wednesday, May 26, 2010

Controlling SQL Server 2005 behavior by setting database options regarding NULL values


There are many tools and interfaces available to access SQL Server, and these interfaces can have default settings that control database behavior from the front end. Right now I am going to talk about the most familiar database front-end interface: SQL Server Management Studio.

To open database Options,

Right-click your database >> Properties >> Options, and you will get the screen shown below.






ANSI NULL Default:

The database option ANSI NULL Default corresponds to two session settings, ANSI_NULL_DFLT_ON and ANSI_NULL_DFLT_OFF. When the ANSI NULL Default database option is false, new columns created with ALTER TABLE and CREATE TABLE statements are NOT NULL by default if nullability for the column is not explicitly defined.

When this option is set to ON, columns comply with the ANSI SQL-92 rules for column nullability. That is, if you don't specifically indicate whether a column in a table allows NULL values, NULLs are allowed. When this option is set to OFF, newly created columns do not allow NULLs if no nullability constraint is specified.
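As a quick sketch using the session-level equivalents (the table and column names below are made up for illustration):

```sql
-- ANSI null default ON: a column with no explicit nullability allows NULLs
set ANSI_NULL_DFLT_ON on
create table DemoOn (SomeValue varchar(10))   -- SomeValue is NULLable
insert into DemoOn select NULL                -- succeeds

-- ANSI null default OFF: the same definition produces a NOT NULL column
set ANSI_NULL_DFLT_OFF on
create table DemoOff (SomeValue varchar(10))  -- SomeValue is NOT NULL
insert into DemoOff select NULL               -- fails: cannot insert NULL
```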

ANSI_NULLS:

The database option ANSI NULLs corresponds to the session setting SET ANSI_NULLS. When this option is set to true, any comparison to a NULL value evaluates to unknown, so no rows match. When it is set to false, comparisons of non-Unicode values to a NULL evaluate to true if both values are NULL.

In addition, if this option is set to true, your code must use IS NULL to determine whether a column has a NULL value. When this option is set to false, SQL Server allows = NULL as a synonym for IS NULL and <> NULL as a synonym for IS NOT NULL.

Below is the code snippet to demonstrate this behavior.

I have a Users table in which the email column has NULL values.

-- This query returns all the rows where email = NULL, so column-to-NULL comparison works
set ANSI_NULLS off
go
select * from Users where email = NULL

-- This query doesn't return any row; column-to-NULL comparison doesn't work
set ANSI_NULLS on
go
select * from Users where email = NULL

ANSI_PADDING:

When this option is set to ON, strings being compared with each other are padded to the same length before the comparison takes place. When this option is OFF, no padding takes place.

ANSI_WARNINGS:

When this option is set to ON, errors or warnings are issued when conditions such as division by zero or arithmetic overflow occur.
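A small demonstration (note that ARITHABORT must also be OFF for the second batch to return NULL instead of raising an error):

```sql
set ANSI_WARNINGS on
select 1/0        -- raises a divide-by-zero error

set ANSI_WARNINGS off
set ARITHABORT off
select 1/0        -- returns NULL, no error
```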

CONCAT_NULL_YIELDS_NULL:

When this option is set to ON, concatenating two strings results in a NULL string if either of the strings is NULL. When this option is set to OFF, a NULL string is treated as an empty (zero-length) string for the purpose of concatenation.
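A quick demonstration of both settings:

```sql
set CONCAT_NULL_YIELDS_NULL on
select 'abc' + NULL    -- returns NULL

set CONCAT_NULL_YIELDS_NULL off
select 'abc' + NULL    -- returns 'abc'
```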

Monday, May 17, 2010

Saving changes is not permitted. How to save the changes in table design, column ordering in SQL Server 2005 / 2008


Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be re-created or enabled the option Prevent saving changes that requires the table to be re-created.

Screen shot:





While designing the database, or during initial development, I keep changing the database schema (sometimes adding more fields or rearranging column positions). I got this popup many times, and it forced me to go and write ALTER scripts to make the changes to the table schema. But through an ALTER script I don't know how to change a column's position, so in the end I had to fix this popup so I could make my changes easily through the table designer.

Below are a few steps to fix this popup so it never appears again.

Step 1: In SSMS, go to Tools >> Options…






Step 2: Drill down to Designers >> Table and Database Designers

Step 3: You will see a bunch of check boxes; the one highlighted in the screen shot above is "Prevent saving changes that require table re-creation". Uncheck this check box.

Hit OK and you are done. Now go to your table designer and add columns or rearrange column positions; it will no longer prompt any popup. (Keep in mind that with this option unchecked, the designer may silently drop and re-create the table behind the scenes to apply your changes.)

Cheers!!!!

Thursday, May 13, 2010

Table Variable Vs Temp table in SQL Server


In many scenarios we need a temporary table for processing data. Now you have two options: 1) a table variable, 2) a temp table. Which one do you choose? Let's talk about the differences between these two and make sure our decision to use one of them is the best for our requirement.

Performance differences

  • Table variable: Table variables don't participate in transactions, logging, or locking. This makes them faster because they don't carry that overhead, but conversely you don't get those features.
    Temp table: Temporary tables are real tables, so you can do things like create indexes, etc. If you have large amounts of data for which access by index will be faster, temporary tables are a good option.
  • Table variable: You can pass table variables back from functions, enabling you to encapsulate and reuse logic much more easily (e.g. a function to split a string into a table of values on some arbitrary delimiter).
    Temp table: You can create a temp table using SELECT INTO, which can be quicker to write (good for ad-hoc querying) and may let you deal with changing data types over time, since you don't need to define your temp table structure up front.
  • Table variable: A table variable can only have a primary index (the primary key declared with it).
    Temp table: A temp table can have multiple indexes.
  • Table variable: If speed is an issue, table variables can be faster.
    Temp table: If there are a lot of records, or you need to search the temp table via a clustered index, a temp table would be better.
  • Table variable: Table variables don't have column statistics. This means the query optimizer doesn't know how many rows are in the table variable (it guesses 1), which can lead to highly non-optimal plans being generated if the table variable actually has a large number of rows.
    Temp table: Temp tables do have column statistics, so the query optimizer can choose different plans for data involving temp tables.
  • Table variable: You cannot alter a table variable with DDL statements (so you cannot create a non-clustered index on a table variable). That makes every table variable a heap, or at best a table with a single clustered index, and every table-variable access a table scan (or clustered index scan).
    Temp table: Temp tables can be altered with DDL statements.
  • Table variable: Use a table variable if there are fewer records to process.
    Temp table: Use a temp table if you have a huge number of records to process.
Syntactical difference

-- To create a table variable
declare @T table
(firstColumn varchar(100))

-- Insert operation
insert into @T select 'some value'

-- Select statement
select * from @T

-- To create a temp table
create table #T
(firstColumn varchar(100))

-- Insert operation
insert into #T select 'some value'

-- Select statement
select * from #T

 

So now I think you can make a wise decision about which one to use when. Have fun!


 

Happy Coding!!!

Tuesday, May 11, 2010

Script for Setting Database Mail and Sending Email From SQL Server 2008


Step 1: First of all you need to enable and set up a Database Mail profile and account. Below is the script; just copy and paste it into your SQL Server query window and change the parameters accordingly.


/*
    Run this script after setting proper configurations
*/

-- ENABLE DATABASE MAIL FEATURE IN SYSTEM CONFIGURATION
use master
go
sp_configure 'show advanced options', 1
go
reconfigure with override
go
sp_configure 'Database Mail XPs', 1
go
reconfigure
go

-- DECLARE DATABASE MAIL ACCOUNT PARAMETERS
declare @qmail_account_name as varchar(64);
declare @qmail_account_description as varchar(64);
declare @qmail_account_email_address as varchar(64);
declare @qmail_account_display_name as varchar(64);
declare @qmail_account_username as varchar(64);
declare @qmail_account_password as varchar(64);
declare @qmail_account_mailserver_name as varchar(64);

-- DECLARE PROFILE PARAMETERS
declare @qmail_profile_profile_name as varchar(64);
declare @qmail_profile_description as varchar(64);

-- DECLARE PRINCIPALS
declare @qprincipal_name as varchar(16);

-- SET SMTP ACCOUNT DETAILS, USERNAME AND PASSWORD HERE
set @qmail_account_mailserver_name = 'smtp.bizmail.yahoo.com'  -- your smtp address
set @qmail_account_username = 'something@something.com'        -- username
set @qmail_account_email_address = 'myEmailName@something.com' -- email address
set @qmail_account_password = 'password for user (something@something.com)' -- password for username

set @qmail_account_name = 'YourAccountName (e.g. MyAppName_EmailAccount)'
set @qmail_account_description = 'Mail account for administrative e-mail.'
set @qmail_account_display_name = 'MyApplicationName Automated Mailer'

-- PROFILE PARAMETERS ASSIGNMENT
set @qmail_profile_profile_name = 'MyAppNameEmailServiceProfile'
set @qmail_profile_description = 'Profile used for database mail jobs'

-- PROFILE PRINCIPAL
set @qprincipal_name = 'public'

if (@qmail_account_mailserver_name = ''
    or @qmail_account_username = ''
    or @qmail_account_email_address = ''
    or @qmail_account_password = '')
begin
    select 'Please enter SMTP details' as [Comments]
end
else
begin
    if exists (select * from msdb.dbo.sysmail_account where [name] = @qmail_account_name)
    begin
        exec msdb.dbo.sysmail_delete_account_sp @account_name = @qmail_account_name
    end

    execute msdb.dbo.sysmail_add_account_sp
        @account_name = @qmail_account_name,
        @description = @qmail_account_description,
        @email_address = @qmail_account_email_address,
        @display_name = @qmail_account_display_name,
        @username = @qmail_account_username,
        @password = @qmail_account_password,
        @mailserver_name = @qmail_account_mailserver_name

    if exists (select * from msdb.dbo.sysmail_profile where [name] = @qmail_profile_profile_name)
    begin
        exec msdb.dbo.sysmail_delete_profile_sp @profile_name = @qmail_profile_profile_name
    end

    execute msdb.dbo.sysmail_add_profile_sp
        @profile_name = @qmail_profile_profile_name,
        @description = @qmail_profile_description

    execute msdb.dbo.sysmail_add_profileaccount_sp
        @profile_name = @qmail_profile_profile_name,
        @account_name = @qmail_account_name,
        @sequence_number = 1

    execute msdb.dbo.sysmail_add_principalprofile_sp
        @profile_name = @qmail_profile_profile_name,
        @principal_name = @qprincipal_name,
        @is_default = 1;
end


 

 

 
Step 2: Create a sproc that sends the email notification. Below is the script; you only need to change the email To/CC, Subject, and Body from your application tables, or set anything you want. Please make sure the email profile name is correct: it must be the same one you configured during the Database Mail setup.


if exists (select 1 from sys.objects where name = 'p_SendEmailNotification' and type = 'P')
    drop procedure p_SendEmailNotification
go

create procedure [dbo].[p_SendEmailNotification]
as
begin
    declare @ErrorCode as int

    begin transaction

    declare @MailPriority varchar(8);
    declare @sendAddressTo varchar(256);
    declare @sendAddressCC varchar(256);
    declare @MailBodyFormat varchar(8);
    declare @DBMailProfile varchar(32);

    set @MailBodyFormat = 'HTML'
    set @DBMailProfile = 'Email profile name which you created during DB Mail setup (e.g. MyAppNameEmailServiceProfile)'
    set @MailPriority = 'Normal'

    select @sendAddressTo = 'someone@someone.com,abc@abc.com'
    select @sendAddressCC = 'someone@someone.com,xyz@xyz.com'

    exec msdb.dbo.sp_send_dbmail
        @profile_name = @DBMailProfile,
        @recipients = @sendAddressTo,
        @copy_recipients = @sendAddressCC,
        @importance = @MailPriority,
        @subject = 'Email Subject',
        @body = 'Email Body....hi this is test email',
        @body_format = @MailBodyFormat;

    set @ErrorCode = @@ERROR
    if (@ErrorCode <> 0)
    begin
        goto ERR_HANDLER
    end

    commit transaction

    set @ErrorCode = @@ERROR
    if @ErrorCode <> 0 begin goto ERR_HANDLER end

ERR_HANDLER:
    if @ErrorCode <> 0
    begin
        rollback transaction
    end
end
You are done. Now you just need to call this sproc and it will send the email. If you want to automate this email as an alert, it is better to create a job and, within that job, call this sproc on whatever schedule you need.

 

Tuesday, April 13, 2010

Disaster Recovery Plan in SQL Server


Abstract
In today's world every piece of data is very important. Data keeps growing day by day, and managing it becomes challenging; thankfully we have machines that can store our data, and all our brains have to do is manage those machines and make sure the data is safe. The question is: how will you make sure the machine storing your data is safe? If a disaster happens tomorrow and the machine is destroyed, what about the data that had been stored on that machine for so long? Is it gone?

We don't want to lose our data, and that is where the disaster recovery plan comes into the picture: you can restore your data even if your current database machine is dead. Busy database administrators can easily push this task onto the back burner with promises to themselves that they'll "take care of backups right after I finish this big project." However, take a moment and consider the value of your data to your organization. What would be the result if an hour's worth of database changes were lost? A day's worth? What about a complete loss of the database?
In this article we are going to learn, if your data storage machine runs Microsoft SQL Server, the ways you can plan your disaster recovery.
Table of contents

  1. Disaster Recovery
  2. Disaster Recovery in SQL Server
     1. Timely Database Backup Strategies
     2. Database Replication
     3. Log Shipping
     4. Database Mirroring
  3. Conclusion
Introduction
Disaster Recovery:
A disaster recovery plan is your business continuity plan. Disaster recovery is a process you can use to help recover information systems and data if a disaster occurs.
Suppose you have a database system storing financial application data: the organization's whole business runs on some applications, and the data is stored in the database. Let's say your database machine is located in the same organization. Now if the database machine crashes tomorrow, all your data is gone; what about that organization's business records, and will the organization be able to continue its business?
If you had a solid disaster recovery plan for your organization's data, the business would not stop; for the organization it would merely be a matter of one system crash. So in disaster recovery, you configure some backup mechanism on another machine to keep the same information you have on your current database machine. If your current database machine dies, you can then recover all your data from the backup machine.
The first step in developing a solid disaster recovery plan is to develop an idea of what constitutes an acceptable loss for your organization. First, consider the impact of losing data stored in your database. Would you be able to recover from the loss of an hour's worth of data? If you're managing a human resources database, chances are that you could deal with this situation by instructing your personnel to reenter data entered during that period. If you're running the database supporting a financial institution, the loss of an hour's data could bring the instant wrath of clients and industry regulators along with significant monetary losses. Protection against these types of losses is provided by backups of the database and use of transaction logging. We'll discuss these in further detail.
Disaster recovery planning is the work that is devoted to preparing all the actions that must occur in response to a disaster. The planning includes the selection of a strategy to help recover valuable data. The selection of the appropriate disaster recovery strategy depends on your business requirements.
Disaster recovery in SQL Server

Microsoft SQL Server database system provides multiple ways to configure disaster recovery.
  • Timely database Backup strategies
  • Database Replication
  • Log shipping
  • Database Mirroring
Disaster Recovery Plan
Timely Database Backup Strategies
If in your disaster recovery plan you are planning for timely database backups, then all you have to do is:
  1. Take a backup of all databases from the database server (you can schedule a SQL Server job to take the backups)
  2. Copy all backup files to your file server or some other directory
  3. Schedule a daily/weekly backup plan in SQL Server
  4. Write your daily/weekly backups to disk and store the disk in a safe place
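As a minimal sketch of step 1 (the database name and backup path below are made up; schedule something like this as a SQL Server Agent job step):

```sql
-- Full backup of one database; repeat (or loop over sys.databases) for all of them
backup database MyAppDb
to disk = N'D:\Backups\MyAppDb_Full.bak'
with init, name = N'MyAppDb full backup'
```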
Database Replication
SQL Server replication, a feature included in Microsoft SQL Server, is used to move data from server to server in a transactionally consistent state from one instance to another.
Replication topology consists of three servers – the subscriber, publisher and distributor. The subscribers are the servers receiving the data. The publisher is the server that holds the initial set of data being made available to the subscribers. The distributor is the server that contains many of the settings. It also holds the data as it is moved from the publisher to the subscribers.
Snapshot Replication
Snapshot replication is the easiest replication type to understand because it is conceptually similar to a full backup and restore. With snapshot replication, the server copies the entire set of data to the subscriber at a scheduled time, rewriting the data at the subscriber with each copy operation. However, snapshot replication does not operate on the complete database as backup and restore does; it copies only the specified articles from the publisher to the subscribers. Keep in mind that because snapshot replication copies the entire set of data every time it runs, you should use this replication type only when the amount of data is small and rather static.
Transactional Replication
Standard transactional replication starts with an initial snapshot and then continuously delivers subsequent data changes to the subscribers as they occur. Peer-to-peer transactional replication is designed for applications that might read or modify the data in any database that participates in replication. Additionally, if any of the servers hosting the databases becomes unavailable, you can modify the application to route traffic to the remaining servers, which contain identical copies of the data.
Log shipping
Log shipping is the process of automating the backup of database and transaction log files on a production SQL Server and then restoring them onto a standby server. But this is not all: the key feature of log shipping is that it automatically backs up transaction logs throughout the day (at whatever interval you specify) and automatically restores them on the standby server, which in effect keeps the two SQL Servers in sync. Should the production server fail, all you have to do is point the users to the new server. Well, it's not really that easy, but it comes close if you put enough effort into your log shipping setup.
Log shipping involves one primary server, one or more secondary servers, and optionally a monitor server. First a transaction log backup of the database on the primary server is taken, then the backup files are moved to the secondary servers, and the transaction log backups are applied to each of the secondary databases individually. The optional third server instance, known as the monitor server, records the history and status of backup and restore operations and, optionally, raises alerts if these operations fail to occur as scheduled.
Log shipping consists of three operations:
  • Back up the transaction log at the primary server instance.
  • Copy the transaction log file to the secondary server instance.
  • Restore the log backup on the secondary server instance.
The primary server instance runs the backup job to back up the transaction log on the primary database. This server instance then places the log backup into a primary log-backup file, which it sends to the backup folder. In this figure, the backup folder is on a shared directory—the backup share.
Each of the three secondary server instances runs its own copy job to copy the primary log-backup file to its own local destination folder.
Each secondary server instance runs its own restore job to restore the log backup from the local destination folder onto the local secondary database.
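The three log shipping operations can be sketched in T-SQL as follows (the database name and paths are made up; in practice the backup, copy, and restore jobs are created for you when you configure log shipping):

```sql
-- 1. Backup job on the primary: back up the transaction log to the backup share
backup log MyAppDb to disk = N'\\backupshare\MyAppDb_log.trn'

-- 2. Copy job on each secondary: copy the .trn file to a local destination
--    folder (a plain file copy, not T-SQL)

-- 3. Restore job on each secondary: apply the log backup, leaving the database
--    able to accept further log restores
restore log MyAppDb from disk = N'D:\LogShipping\MyAppDb_log.trn' with norecovery
```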
Database mirroring
The primary goal of database mirroring technology is to increase database availability. When enabled, every update to the principal database is immediately reproduced on the mirror database.
Database mirroring is primarily a software solution for increasing database availability. You can implement mirroring only on a per-database basis, and only for databases that use the full recovery model; the simple and bulk-logged recovery models do not support database mirroring, so all operations on a mirrored database are always fully logged. Mirroring also offers failover capabilities, which may be made automatic depending on how you configure it. The mirrored copy is a standby copy that cannot be accessed directly; it is used only for failover situations.
Principal Server: In a database mirroring configuration, there are two copies of a single database, but only one copy is accessible to the clients at any given time. The copy of the database that the applications connect to is called the principal database. The server that hosts the principal database is known as the principal server.
Mirror Server: The mirror is the copy of the principal database. The mirror is always in a restoring state; it is not accessible to the applications. To keep this database up-to-date, the log records are transferred from the principal and applied on the mirror database. The server that hosts the mirror database is known as the mirror server.
Witness Server: The optional witness is a SQL Server instance in a database mirroring configuration. It is separate from the principal and mirror instances. When database mirroring is used in synchronous mode, the witness provides the mechanism for automatic failover.
Conclusion
In this article we discussed disaster recovery and disaster recovery plans in SQL Server. A disaster recovery plan is very important for an organization running its business on software applications; you could say the data recovery plan is the business continuity plan. There are many ways to configure disaster recovery for your database; in SQL Server you can use timely backup strategies, database replication, log shipping, and database mirroring.

Thursday, March 4, 2010

SQL Server best practices and code review checklist

Quite frankly, let me share that I learned T-SQL programming from one of the worst database designs, which was designed by me. I never knew whether my database design was going to work for me or not. What I did was start development and create a bunch of stored procedures, functions, and views, every sproc having huge data-processing logic, table joins, and function calls, so I used almost all the syntax of T-SQL programming (including transaction handling, temp tables, cursors, derived tables, subqueries, etc.).
I will never forget one of my sprocs in which I used a cursor inside a cursor and two temp tables, one for each cursor, processing huge numbers of records in the cursors; now you can imagine the performance of that sproc. Anyhow, the first phase of my application was developed and deployed successfully.

Then it was time to start the next phase. In the beginning I realized that the current database design was not going to support the implementation of a small piece of functionality. So what should I do? Create new tables? Maintain the same data in multiple tables? Oh no, what a bad design!

Thank God it was not a customer project; it was an internal application for training purposes.

Then I started looking at my existing database design and started tweaking. It took me around two weeks to stabilize the existing database: changing the sprocs and code and making sure the existing functionality did not break.

In the end the DB was stabilized.

Now let me make sure you do not make such mistakes. Here are some tips you can look at before you finalize your database design.


So guys, SQL Server code review is very important. Review your database design carefully; below are the steps that will help ensure a good database design.

Database Design Considerations

  • Database Model - Build tables that balance the number of joins (normalization) versus a reasonable amount of denormalization. If you want the application to be highly normalized for the opportunity to extend it for more generic needs, be sure you understand the trade-off of having to join a potentially large number of tables. On the other side of the coin, if your database only has a handful of tables that are 200+ columns with a great deal of duplicate data, you may want to strike a balance.
  • Primary Keys - Ensure a column in each table is designated as the primary key and select an appropriate data type such as an integer.
  • Foreign Keys - Validate all of the foreign keys are established to maintain referential integrity among your tables to ensure the proper relationships are maintained.
  • Auto Increment - If a column needs an incremental value, use an identity for that column as opposed to a custom solution.
  • Correct Data Types - Be sure to select the correct data types for each column. For example, if a varchar column can be used as opposed to a BLOB data type, in general the performance implications will be far less. So choose the data type wisely.
  • Indexing - As new queries are written and columns are added or removed, it is a good time to validate that your indexes will support your performance needs. Validate that indexes are not unnecessarily duplicated and, as indexes are added, that the intended queries actually take advantage of them. If not, think twice about building a new index that SQL Server will need to maintain but your new query will not use.
  • Statistics - Validate that sufficient statistics are available on a per-column basis, or that the database is configured to create them automatically.
  • Lookup Tables - As opposed to being forced to hard code values in T-SQL code, lookup the values in a table with the proper referential integrity. Over time the code will be much easier to understand and simpler to maintain.
  • Defaults - As applicable, create defaults for the columns to have a meaningful value for the column.
  • NULL Usage - Review the NULL usage on a per-column basis and determine if NULL is an applicable value or if a business specific value will be used instead which can be enforced via a lookup table.
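Several of the points above (primary keys, foreign keys, auto increment, defaults, lookup tables) can be illustrated in one hedged sketch; all table, column, and constraint names below are made up:

```sql
-- Lookup table instead of hard-coded status strings in T-SQL code
create table OrderStatus
(
    StatusId   int         not null primary key,   -- primary key
    StatusName varchar(32) not null
)

create table Orders
(
    OrderId   int      not null identity(1,1) primary key,  -- auto increment
    StatusId  int      not null
              constraint DF_Orders_Status default (1)       -- meaningful default
              constraint FK_Orders_Status references OrderStatus (StatusId), -- FK
    CreatedOn datetime not null
              constraint DF_Orders_CreatedOn default (getdate())
)
```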

General T-SQL Considerations

  • Testing Results - Validate the functional testing was completed successfully to ensure you will not face functional issues once the code is deployed and be in a situation where you are asked to change the code on the fly.
  • Response Time Requirements - Validate the performance testing yielded acceptable performance for the new code that is going to be released as well as not negatively impact the existing platform. To me, the only situation worse than having to change code on the fly is the need to firefight a performance issue.
  • Scope of Data - Only process (SELECT, INSERT, UPDATE or DELETE) the needed data. I have seen a few different applications where SQL Server is flooded with SELECT statements that return unneeded data which is then unnecessarily filtered by the front-end application; a much more efficient approach is to select only the data needed. On another occasion, I observed duplicate data being unnecessarily inserted into a table; the resolution in that specific scenario was to filter the duplicate data on the front end before inserting it.

Stored Procedure Considerations

  • General Review - As you review the code, make sure you do not see any of the following:
    • Unnecessary JOINs
    • Inaccurate calculations or functions
    • Unnecessary data
    • Cartesian product
  • General Technique - Make sure the general coding practice is reasonable. If possible, use SET based logic as opposed to a cursor or a while statement. In addition, avoid logic that processes data in a temp table as opposed to processing the data directly. For example, there is no reason to load data into a temporary table, update the data in the temporary table, delete the data from the base table and then load the data from the temporary table. Just update the needed data in the base table.
  • Query Plan - Review the query plan to validate it is efficient. Validate the following items:
    • Table scanning is not occurring
    • Bookmark lookups are not occurring
    • Code or indexes can be changed to convert index scans to index seeks
  • Optimizer Overrides - Review the code to determine if index hints or NOLOCK clauses are really necessary, or if a portion of the code can be changed to not try to outsmart the optimizer. These hints could be beneficial in the short term, but as the database or code changes, and as new versions of SQL Server are released, these overrides can become an unexpected exercise when it comes time to upgrade.
  • Comments - Be sure valuable comments are included with the code; if not, it is rare that comments are added once the code is deployed.

Read the full article on the SQL Server code review checklist at http://www.mssqltips.com/tip.asp?tip=1303



Wednesday, March 3, 2010

How to remove an object with specific property value from the List of Objects using LINQ

Suppose you have an object called Employee (having properties EmployeeID and EmployeeName) and a list of Employee objects (List<Employee> lstEmployee). Now, if you want to remove a particular Employee object which has, let's say, EmployeeID 10 from the list lstEmployee, below is the source code, which will also give you some feel for LINQ.

using System;
using System.Linq;
using System.Collections.Generic;

public partial class _Default : System.Web.UI.Page
{
    class Employee
    {
        public int EmployeeId { get; set; }
        public string EmployeeName { get; set; }
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        List<Employee> lstEmployee = new List<Employee>();

        Employee emp = new Employee();
        emp.EmployeeId = 1;
        emp.EmployeeName = "Name1";
        lstEmployee.Add(emp);

        emp = new Employee();
        emp.EmployeeId = 2;
        emp.EmployeeName = "Name2";
        lstEmployee.Add(emp);

        emp = new Employee();
        emp.EmployeeId = 10;
        emp.EmployeeName = "Name10";
        lstEmployee.Add(emp);

        // The LINQ below keeps all the employees that do NOT have employee id 10
        lstEmployee = lstEmployee.Where(x => x.EmployeeId != 10).ToList();
    }
}
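A note on the design choice: Where(...).ToList() builds a new list rather than modifying the existing one. If you prefer to remove the matching items in place, List<T>.RemoveAll with the same predicate is a common alternative (shown here against the same lstEmployee list from above):

```csharp
// Removes, in place, every employee whose EmployeeId is 10
lstEmployee.RemoveAll(x => x.EmployeeId == 10);
```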

Happy Coding !!!!!!!