Sunday, 6 November 2016

UNIQUE index advantage over UNIQUE constraint

Unique constraints and unique indexes have been around for many years now. However, I recently came across a problem for which I had to choose the right one as the solution.

I had a table where a ReleaseId could exist only once, with its cancelled bit either ON or OFF.
CREATE TABLE ReleaseState
(
    Id int IDENTITY(1,1),
    ReleaseId bigint NOT NULL,
    IsCancelled bit DEFAULT (0)
)

I first created a unique constraint.
ALTER TABLE dbo.ReleaseState ADD CONSTRAINT UQ_ReleaseId UNIQUE (ReleaseId)

Then I checked the plan for my query:
IF EXISTS (SELECT * FROM dbo.ReleaseState WHERE ReleaseId = @ReleaseId AND IsCancelled = 1)

[Screenshot: query plan showing an index seek on UQ_ReleaseId followed by a RID lookup into the heap]

As you can see, the unique constraint created a unique index internally, and that is what SQL Server uses for my query. It performs a seek on the unique index and then retrieves the full record to look up the IsCancelled column.

I want to avoid that extra lookup into the heap. Can I include a column in the unique index? The ADD CONSTRAINT command does not give me any option to do that. This is the main advantage of a unique index over a unique constraint: it lets you take advantage of all the index options, such as INCLUDE.

So let's drop the constraint (ALTER TABLE dbo.ReleaseState DROP CONSTRAINT UQ_ReleaseId) and add a unique index that includes the extra column.

CREATE UNIQUE INDEX UX_ReleaseState_ReleaseId ON dbo.ReleaseState(ReleaseId) INCLUDE (IsCancelled)


With the unique index, I get the following query plan.

[Screenshot: query plan showing a single seek on UX_ReleaseState_ReleaseId, with no lookup]

Looking into the seek operation, I see the following:

[Screenshot: seek operator details listing IsCancelled among the output columns]

This confirms that the seek operation alone is sufficient (with the help of the INCLUDE index option) to cover my query, so SQL Server can avoid the extra heap lookup for the IsCancelled column.

Friday, 15 August 2014

nHibernate and NOLOCK

I wanted to fire a query in an existing session, under a new transaction, in read uncommitted mode. This means my query would not wait for any in-flight UPDATE commands on my table.

I had the following code in my C# class:

ISession session = GetSession();
using (var tran = session.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted))
{
    IQuery query = session.CreateQuery(
        "SELECT Count(E.Id) FROM EditorNotification E " +
        "WHERE E.User = :userId AND E.IsRead = 0")
        .SetParameter("userId", userId);

    object uniqueResult = query.UniqueResult();
    int count = Convert.ToInt32(uniqueResult);
    tran.Commit();
    return count;
}

I went to SQL Profiler to see whether my transaction really started in read uncommitted mode or not. I don't want to believe nHibernate until I see it working in SQL Profiler.

So I added "Audit Login" and a few TM events to my SQL Profiler trace.

[Screenshot: SQL Profiler trace event selection with Audit Login and TM events checked]

After executing my C# code, I noticed the Audit Login event still shows isolation level read committed, which is my default isolation level. That does not sound right. My C# code does not have any bugs; it is very verbose and clear. So either SQL Profiler is lying or nHibernate is not doing what it is supposed to do.

[Screenshot: Audit Login event text showing the transaction isolation level set to read committed]

It is easy to test who is at fault. In SQL Server Management Studio, I opened a new query on this database and ran the following commands to keep an open transaction on the EditorNotification table, so as to block any read committed queries.

begin transaction
go
update EditorNotification set IsRead =1 where id = 3
go


After running the C# code, I noticed that it got blocked and finally timed out waiting for a read lock on the table. That means nHibernate is not doing what I am telling it to do.

So I found a solution in nHibernate's CreateSQLQuery method, where I can use my old-school SQL skills and directly specify a query with whatever query hints I want to use. The modified C# code is as follows:

ISession session = GetSession();
IQuery query = session.CreateSQLQuery(
    "SELECT Count(E.Id) FROM EditorNotification E WITH (NOLOCK) " +
    "WHERE E.User_id = :userId AND E.IsRead = 0")
    .SetParameter("userId", userId); // parameterized instead of string.Format, to avoid SQL injection

object uniqueResult = query.UniqueResult();
int count = Convert.ToInt32(uniqueResult);
return count;
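
As an aside, another workaround that might work (an untested sketch on my part, not something I verified in Profiler) is to issue the isolation level change yourself on the session's connection before running the query:

// Untested sketch: force the isolation level on the current connection
// via raw SQL, instead of relying on BeginTransaction to do it.
session.CreateSQLQuery("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED")
       .ExecuteUpdate();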


The CreateSQLQuery call came to the rescue. If you happen to figure out why the earlier nHibernate code did not work, please share it in the comments on this blog.


Monday, 10 February 2014

How to measure number of lines of code using powershell command line

Someone asked me how many lines of code there are in our current platform. I googled for a simple command line that I could run without installing any special tool, and found the following PowerShell command. Just wanted to share it with everyone:

ls * -recurse -include *.aspx, *.ascx, *.cs, *.ps1 | Get-Content | Measure-Object -Line

Thursday, 5 September 2013

Code Quality: Diagnosability of your application

Code coverage, unit tests, and static analysis are some of the measures typically associated with code quality. I want to touch on a few more important aspects of code quality. Let's start with diagnosability.

Diagnosability means the ability to diagnose an issue quickly and accurately. Defects are always going to be there; the question is, once a tester finds a defect, how quickly and accurately can you diagnose it and know where to fix it?

In other words, how does your application facilitate troubleshooting and the ability to diagnose defects? Developers are always fans of debuggers. If you can attach a debugger, you can find the root cause of any crazy defect. However, you will not always have the liberty of debugging. It takes time and requires a setup exactly matching the production database, along with the various other systems involved, in order to replicate the issue. You may have conditional logic that is hit only in a specific state of the application. Debugging requires you to replicate that exact state in your dev environment so you can step through the code and see which lines are executed, along with the values of the variables used, etc.

Sometimes (or most of the time) there is not enough time to do all this setup for debugging. Management may demand an urgent fix, and that is a fair expectation.

In such scenarios, diagnosability comes to the rescue, provided developers build the application with diagnosability in mind. The simplest solution is to add enough logging or trace statements, redirected to a log file, the event log, or the console, etc., depending on the nature of the application. Many architects prefer event logs for critical errors, since they can be monitored through tools like SCOM.

In .NET we have the System.Diagnostics namespace, which provides all the necessary supporting classes to instrument the logic flow of an application into event logs or trace files. There are also many third-party paid and open source loggers out there.

.NET supports levels such as information, warning, and error as trace settings in the System.Diagnostics namespace. Use these efficiently wherever possible so you can control the amount of logging; too much logging can also slow down performance.
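
For example, here is a minimal sketch using TraceSource; the source name, event ids, and the Warning threshold are illustrative choices, not from any real application:

using System.Diagnostics;

public static class Log
{
    // Only Warning and above are emitted; flip the switch to turn on more detail.
    private static readonly TraceSource Source =
        new TraceSource("MyApp", SourceLevels.Warning);

    public static void Example(int orderId)
    {
        // Filtered out at the Warning threshold, so it costs little in production.
        Source.TraceEvent(TraceEventType.Information, 1001, "Processing order {0}", orderId);

        // Passes the threshold and is written to the configured listeners.
        Source.TraceEvent(TraceEventType.Error, 1002, "Order {0} failed validation", orderId);
    }
}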

It is also important not to log sensitive data into log files, e.g. user names, confidential information, or encrypted values. Mostly log the identifiers of entities, plus any decision variables needed to determine the state of the application.

However, the next question that comes to my mind is: what about third-party code, or shared modules from other teams across the enterprise? Logging standards may not be consistent across the board, and it may be legacy code that you cannot change to add diagnosability support. For this, .NET provides a way to IL-inject code and extend the diagnosability of any application. There is one tool I am currently evaluating called dynaTrace; I'll publish my findings on it later, but I would prefer such tools for new projects. The complexity of enterprise applications has gone up exponentially, and we are already in the era of multi-platform, multi-device, multi-language, multi-geography, multi-tenant, multi-xyz applications. Diagnosability becomes very important, and it is directly proportional to the maintenance budget.

Thursday, 8 August 2013

Let's agree to disagree

On design or architecture definition projects, some clients can afford just one enterprise architect on the team, and he is the final decision maker. The matter gets complicated, however, when there are multiple enterprise or solution architects. As everyone knows, we have a wide range of options available in the technology landscape; there are multiple ways of architecting a solution to address a business problem. Each architecture may follow different approaches, methodologies, and practices, yet still achieve the business goal. Within Microsoft technologies alone there are solutions and products that do similar things. They exist for whatever reasons, and we don't want to go into that discussion here; those reasons can be perfectly valid, e.g. legacy support. The point I want to convey is that when I play the architect role as a consultant at various companies, I come across this conflict very often. Companies spend lots of $$ discussing and arguing about build vs. buy, one technology over another, one tool vs. another, etc. I always ask what company standards are followed across the enterprise and go from there. But sometimes you need to be innovative and think beyond the existing best practices, because every application is unique. You cannot stop people from thinking differently, so discussions and arguments are bound to happen. It is a very slippery slope to keep spending time endlessly on such things.


That is why one important quality all architects can have is the ability to "agree to disagree". The wiki definition: "To tolerate each other's opinion and stop arguing; to acknowledge that an agreement will not be reached".

Monday, 5 August 2013

SOA Services using WCF over msmq

Support for asynchronous web services is an important part of a Service Oriented Architecture (SOA) implementation. It requires stable and reliable delivery of messages to services. Microsoft's MSMQ platform guarantees delivery of messages, and WCF supports the net.msmq binding so that you can turn existing long-running web services into fire-and-forget services. WCF saves a lot of the coding effort of sending, receiving, and peeking MSMQ queues; the platform does all of that for us with just some configuration file settings. However, installing and configuring net.msmq services requires some work and an understanding of how non-HTTP activation works. In this post, I am going to start with configuring your machine to make it ready for fire-and-forget services.
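
Before the setup, one piece of context: a queued service contract must use one-way operations, because a client that drops a message on a queue never waits for a reply. A minimal sketch (the service and operation names are illustrative):

using System.ServiceModel;

// Illustrative contract for a fire-and-forget service over net.msmq.
// IsOneWay = true is required; queued operations cannot return a reply.
[ServiceContract]
public interface INotificationService
{
    [OperationContract(IsOneWay = true)]
    void Notify(string message);
}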

Let's start by installing the Windows Communication Foundation HTTP and non-HTTP activation features, along with the Windows Process Activation Service.

[Screenshot: the "Turn Windows features on or off" dialog with the WCF activation features and the Windows Process Activation Service selected]

Make sure the following Windows services are started and running:
1. Message Queuing
2. Net.Msmq Listener Adapter
3. Windows Process Activation Service

You should be able to see the following message in the Event Viewer:

[Screenshot: Event Viewer message]

In some cases, depending on the sequence in which you installed the components (for example, .NET 3.5 WCF HTTP Activation), you may get an error when you try to visit the WCF service in a browser. The following is the error message you may get when you run an application hosted on Internet Information Services (IIS):

Could not load type 'System.ServiceModel.Activation.HttpModule' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpModule' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.

Visit http://support.microsoft.com/kb/2015129 for the solution. I modified my applicationHost.config to add the runtimeVersionv2.0 precondition, as shown below.

<add name="ServiceModel" type="System.ServiceModel.Activation.HttpModule, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="managedHandler,runtimeVersionv2.0" />

At this point I am able to visit my WCF service through a web browser. Also, when I call the WCF service through the MSMQ binding, I can see my MSMQ message being picked up by the Net.Msmq Listener Adapter, and finally my WCF service is invoked. I will walk you through my source code in the next post.
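
In the meantime, just to illustrate the client side, a fire-and-forget call over net.msmq might look roughly like this (a sketch using the illustrative contract above; the queue path and security mode are assumptions for a local test):

using System.ServiceModel;

// Sketch: the call returns as soon as the message is queued; no reply is awaited.
var binding = new NetMsmqBinding(NetMsmqSecurityMode.None); // security off just for local testing
var address = new EndpointAddress("net.msmq://localhost/private/NotificationService");
var factory = new ChannelFactory<INotificationService>(binding, address);
INotificationService client = factory.CreateChannel();
client.Notify("hello");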


Saturday, 20 July 2013

Spending too much time debugging?

Have you ever observed your programmers using Visual Studio debugging too frequently during the development phase? Many team leads or architects may ignore this behavior, or may not even notice it, because it boils down to the basics of programming skills that we take for granted. Debugging is usually slow, because it takes time for the Visual Studio (or any other) debugger to load all the symbols into memory. I think debugging too frequently during development shows a lack of confidence in one's understanding of how the program works, or sometimes a lack of understanding of the fundamentals of the framework (e.g. .NET, Java, etc.). This affects the productivity and velocity of the team.

During ground-up, from-scratch development of a program or any functionality, small or big, I prefer to have a mental picture of what is going to happen in my code. I first spend time on which tasks are involved and create the classes and function signatures. These are often called contracts: plain vanilla functions without any implementation in them. If it's a database, create the data model. If it's a .NET program, create the class diagram with properties and functions. If a function has a return value, just return an empty value, etc., so that it compiles. Let's make sure all the code compiles. No debugging yet.

Then I write unit tests for the public functions and run the tests. I try to write functions as if they were completely isolated pieces of functionality. Obviously, all the unit tests will fail at this point. This takes time, but it is the foundation for any new program or functionality. Many architects call this Test-Driven Development, and it makes perfect sense why companies spend thousands of dollars educating programmers to adopt this practice.
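
A minimal sketch of that red-first step, using NUnit (the class and its behavior are illustrative, not from a real project):

using NUnit.Framework;

// Illustrative contract stub: it compiles and returns an empty value; no logic yet.
public class NotificationService
{
    public int GetUnreadCount(string userId)
    {
        return -1; // deliberate stub value, so the test below fails first
    }
}

[TestFixture]
public class NotificationServiceTests
{
    // Red first: fails against the stub, passes once the real logic is written.
    [Test]
    public void GetUnreadCount_UnknownUser_ReturnsZero()
    {
        Assert.AreEqual(0, new NotificationService().GetUnreadCount("nobody"));
    }
}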

Once unit test development is done, start adding real logic to the functions. It is important to note that you do not know who will call a function in the future, or what parameter values (valid or invalid) will be passed. We should always validate all parameters first: e.g. for a string parameter, use string.IsNullOrEmpty or string.IsNullOrWhiteSpace, and in the negative case throw an ArgumentNullException. Remember the basic principle "garbage in, garbage out" you learned in college?
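
A minimal sketch of such a guard clause (the function is illustrative):

using System;

public static class Guards
{
    public static int CountUnreadFor(string userId)
    {
        // Validate parameters first: garbage in, garbage out.
        if (string.IsNullOrWhiteSpace(userId))
            throw new ArgumentNullException("userId");

        // ... real logic goes here ...
        return 0;
    }
}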

No debugging required yet. I keep adding logic to the function and run the unit tests against it. When all my unit tests pass, I move on to the next function and repeat the same steps. Once all the functions are done, run the program end to end. At this point you may encounter integration-related issues, but they should be very simple to fix, because you have already made sure that each function works as expected at the unit level.

I call this a very basic programming skill. Many times it is tempting not to write unit tests for a very simple function. You can cover such functions through other unit tests, but in my opinion the majority of functions should have unit tests.

In Visual Studio, pressing F5 runs the program under the debugger; pressing Ctrl + F5 runs it without the debugger.

There can be exceptions to this process. Sometimes you are working on a defect and just don't have time to understand the whole workflow of the program, e.g. what happens from start to end; or it may be someone else's code and you have been given a specific task to fix a very pointed piece of functionality or defect. Someone might already have narrowed it down to the line of code or function that needs a fix. This usually happens on maintenance projects. Before you change the line of code, you want to check the values of the variables, so you debug the program. You have the option of using logging, but if there is any reverse engineering involved, the debugger is the best friend that comes to your rescue. However, as I explained earlier, for ground-up development of a new program or the addition of a new feature, there is no compromise on discipline.