Wednesday, November 21, 2007

Unable to register BLL using regasm

Today I faced a problem while registering the Business Logic Layer (BLL) using regasm from the command prompt. The error was related to access rights while writing a registry key, something like: "Unable to write............ HKEY_CLASSES_ROOT\PostNexusNET_BLL.XMLProcessor\CLSID".

My first thought was that it must be an access-rights problem, since I have a genuine Windows XP (Service Pack 2) installed on my PC, and branded PCs often won't allow you to modify registry entries as a normal user.

When I opened the registry using the regedit command from the Run menu and navigated to the location described above, I found an empty entry named "PostNexusNET_BLL.XMLProcessor" with a child key "CLSID" holding no values. I tried to delete, edit and rename that key but could not succeed. I restarted the computer in safe mode and tried the same options, but still could not succeed. Note that I had assigned full rights/permissions to the "PostNexusNET_BLL.XMLProcessor" key.

After trying so many options and reading so many blogs on the internet, I found no solution for two days. I consider myself lucky that I finally found the one described below:

1) Right-click the registry key which you are not able to manipulate (edit, delete or rename), then click Permissions.

2) Select your username from the list. If it is not present in the list, click Add, type the name of the user, and click OK.

3) Then click Advanced, select your user from the list, and check the "Replace permission entries........" box. Make sure the other check box is also selected.

Before following these steps, make sure that you are logged in as an Administrator.

This solved my problem, and now I can register all my DLLs.
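
For reference, the registration command I was running was along these lines (the assembly name follows the registry key above; run it from a Visual Studio command prompt with administrative rights):

regasm PostNexusNET_BLL.dll /tlb:PostNexusNET_BLL.tlb /codebase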

Sunday, November 4, 2007

Login failed for user 'sa'. The user is not associated with a trusted SQL Server connection (Microsoft SQL Server, Error: 18452)

After installing SQL Server 2005 Express, if you try to log in as user 'sa' with your user-defined password and get the following error: "Login failed for user 'sa'. The user is not associated with a trusted SQL Server connection (Microsoft SQL Server, Error: 18452)", then please read the following carefully:


During installation, SQL Server Database Engine is set to either Windows Authentication mode or SQL Server and Windows Authentication mode. This topic describes how to change the security mode after installation.

If Windows Authentication mode is selected during installation, the sa login is disabled. If you later change authentication mode to SQL Server and Windows Authentication mode, the sa login remains disabled. To enable the sa login, use the ALTER LOGIN command.

Security Note: It is very important to choose a strong password for the sa login.

The sa login can only connect to the server using SQL Authentication.

To change security authentication mode
1. In SQL Server Management Studio Object Explorer, right-click your server, and then click Properties.
2. On the Security page, under Server authentication, select the new server authentication mode, and then click OK.
3. In the SQL Server Management Studio dialog box, click OK to acknowledge the need to restart SQL Server.

To restart SQL Server from SQL Server Management Studio

1. In Object Explorer, right-click your server, and then click Restart. If running, SQL Server Agent must also be restarted.

To enable the sa login
1. Execute the following statements to enable the sa login and assign a password.
ALTER LOGIN sa ENABLE;
GO
ALTER LOGIN sa WITH PASSWORD = '<enterStrongPasswordHere>';
GO
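
Once SQL Server authentication is enabled and sa has a password, you can verify the login from a .NET client. This is a minimal sketch; the server name and password are placeholders:

using System;
using System.Data.SqlClient;

class SaLoginCheck
{
    static void Main()
    {
        // SQL Server authentication: User Id/Password instead of Integrated Security
        string connString = @"Server=.\SQLEXPRESS;Database=master;" +
                            "User Id=sa;Password=<enterStrongPasswordHere>;";
        using (SqlConnection conn = new SqlConnection(connString))
        {
            conn.Open();   // fails with error 18452/18456 if the mode or login is still wrong
            Console.WriteLine("Connected as sa: " + conn.ServerVersion);
        }
    }
}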

Tuesday, October 2, 2007

Unable to make the session state request to the session state server.........HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\.......

Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection.
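
For context, this error concerns StateServer session mode, which is configured in web.config roughly as follows (42424 is the default port of the ASP.NET State Service; the client port here must match the port the service listens on):

<configuration>
  <system.web>
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=127.0.0.1:42424"
                  timeout="20" />
  </system.web>
</configuration>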


NOTE: If you have tried every other option (including re-installation of the Framework, IIS settings, starting the ASP.NET State Service from services.msc, etc.) to resolve this problem and still have not succeeded, then check for any anti-virus software installed on the system. Some anti-virus products inspect communication over ports, which indirectly hinders the incoming and outgoing requests of the ASP.NET application. In my case I had NOD32 Antivirus installed, so I opened the antivirus --> Threat Protection Modules --> IMON --> Setup --> HTTP, then unchecked "Automatically detect HTTP communication on other ports". This solved my problem.

I am sure this will solve the above State Service problem. Please feel free to reply.

Sunday, June 17, 2007

Serviced Components in VB.NET (COM)

Introduction

.NET is Microsoft's next-generation component technology. It simplifies component development and deployment and supports interoperability between programming languages. Like COM, .NET provides you with the means to rapidly build binary components, and Microsoft intends for .NET to eventually succeed COM. Like COM, .NET does not provide its own component services. Instead, .NET relies on COM+ to provide it with instance management, transactions, activity-based synchronization, granular role-based security, disconnected asynchronous queued components, and loosely coupled events. The .NET namespace that contains the types necessary to use COM+ services was named System.EnterpriseServices to reflect the pivotal role it plays in building .NET enterprise applications. A .NET component that uses COM+ services is called a serviced component, to distinguish it from the standard managed components in .NET.

Note: You must add System.EnterpriseServices to your project references and import that namespace in the files that use it.

Developing serviced components

A .NET component that takes advantage of COM+ services needs to derive from the .NET base class ServicedComponent. ServicedComponent is defined in the System.EnterpriseServices namespace. To develop a simple serviced component, follow these steps:

  1. Define a class that derives directly or indirectly from the ServicedComponent class. For example, the following code ensures that the MyComponent class is hosted by a COM+ application (note that MyClass cannot be used as the class name, because it is a reserved word in VB.NET):

    Imports System.EnterpriseServices

    Public Class MyComponent
        Inherits ServicedComponent
    End Class

  2. Apply service attributes to the assembly, class, or method.

    Imports System.EnterpriseServices

    <Transaction(TransactionOption.Required)> _
    Public Class MyComponent
        Inherits ServicedComponent

        <AutoComplete()> _
        Public Sub DoSomething()
            '......
        End Sub
    End Class


  3. Deploy the serviced component application by registering its assemblies dynamically or manually.

After a serviced component is registered, clients can create instances of the component the way they create instances of any other component.
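
Manual registration is done with the regsvcs.exe tool, while dynamic registration happens automatically the first time a managed client creates an instance of the component. Assembly-level attributes such as the following (the names here are illustrative; serviced component assemblies must be strong-named) control how the COM+ application is created at registration time:

Imports System.Reflection
Imports System.EnterpriseServices

' Name and activation type of the COM+ application hosting this assembly
<Assembly: ApplicationName("MyComPlusApp")>
<Assembly: ApplicationActivation(ActivationOption.Server)>
' Serviced component assemblies must have a strong name
<Assembly: AssemblyKeyFile("MyKey.snk")>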


There are two ways to configure a serviced component to use COM+ services. The first is COM-like: you derive from ServicedComponent, add the component to a COM+ application, and configure it there. The second way is to apply special attributes to the component, configuring it at the source-code level. When the component is added to a COM+ application, it is configured according to the values of those attributes. Attributes are discussed in greater detail below.


.NET allows you to apply attributes to your serviced components with great flexibility. If you do not apply your own attributes, a serviced component is configured using default COM+ settings when it is added to a COM+ application. You can apply as many attributes as you like. A few COM+ services can only be configured via the Component Services Explorer. These services are mostly deployment-specific configurations, such as persistent subscriptions to COM+ events and allocation of users to roles. In general, almost everything you can do with the Component Services Explorer can be done with attributes. It is recommended that you put as many design-level attributes as possible (such as transaction support or synchronization) in the code and use the Component Services Explorer to configure deployment-specific details.

For more information on registering and configuring serviced components (COM), please refer to the following link:

http://www.codeproject.com/vb/net/serviced_components.asp

Wednesday, May 30, 2007

Differentiate between a Function and Stored Procedure?

In many instances you can accomplish the same task using either a stored procedure or a function. Both can be custom defined and be part of any application. Stored procedures are designed to return their output to an application; functions, on the other hand, are designed to send their output to a query or T-SQL statement. For example, User-Defined Functions (UDFs) can be invoked from a SELECT or an action query, while Stored Procedures (SPROCs) are run using EXECUTE or EXEC. Functions are created using CREATE FUNCTION, and procedures using CREATE PROCEDURE.

To decide between using one of the two, keep in mind the fundamental difference between them:

  • Stored procedures are designed to return their output to the application. A UDF returns table variables, while a SPROC can't return a table variable, although it can create a table.
  • Another significant difference between them is that UDFs can't change the server environment or your operating system environment, while a SPROC can. Operationally, when T-SQL encounters an error the function stops, while T-SQL will ignore an error in a SPROC and proceed to the next statement in your code (provided you've included error handling support).
  • You'll also find that although a SPROC can be used in a FOR XML clause, a UDF cannot be.

If you have an operation such as a query with a FROM clause that requires a rowset be drawn from a table or set of tables, then a function will be your appropriate choice. However, when you want to use that same rowset in your application the better choice would be a stored procedure.

There's quite a bit of debate about the performance benefits of UDFs vs. SPROCs. You might be tempted to believe that stored procedures add more overhead to your server than a UDF. Depending upon how you write your code and the type of data you're processing, this might not be the case.

A procedure does not return a value; it’s just a block of code that gets executed when called.

In C/C++, procedures and functions are the same. In .NET, procedures don't return values but functions do. In Java there is nothing like procedures.

In databases, procedures are stored compiled queries, and functions are built-in pieces of expressions that you can use to build your queries.

Main differences between UDF and Stored Procedure


Keeping the fundamental differences above in mind, some other differences are listed below:

  1. A Function always returns a value using the return statement, while a Procedure may return one or more values through parameters or may not return a value at all.
  2. A Function can be used in SQL queries as a UDF (User-Defined Function), while a Procedure cannot be used in SQL queries; e.g. functions can be called inside a SELECT statement but procedures cannot (see the T-SQL sketch after this list).
  3. We can use DDL in a Procedure using the EXECUTE IMMEDIATE statement, while that is not possible in Functions.
  4. DML statements cannot be used in a Function, but they can be used in a Procedure.
  5. A Procedure can be called from another project, but a Function works only within the same project.
  6. We can't have any DDL, DML or TCL command inside a Function if that function is called from a query. But if the Function is not called from a query, then we can have all transactional statements (DDL, DML and TCL) inside it.
  7. Functions can be part of any valid PL/SQL expression, but Procedures cannot; procedures must be called as standalone statements.
  8. You can use DDL statements in Functions and Procedures through EXECUTE IMMEDIATE in recent Oracle versions (or the DBMS_SQL package in older versions). So, regarding the use of DDL or DML statements in a function or procedure, there is no difference in that context.
  9. Functions are basically precompiled, but Procedures are not. That's why we are able to call functions from a SELECT statement but not procedures. In that sense, Functions are faster than Procedures.
  10. A Stored Procedure accepts both input and output parameters, whereas a Function accepts only input parameters. UDFs (User-Defined Functions) cannot return output parameters to the caller; they do, however, let us return a single scalar value or a locally created table.
  11. We can use a Function inside a Procedure, but vice versa is not possible.
  12. UDFs can accept a smaller number of parameters than stored procedures: UDFs can have up to 1,024 parameters, whereas Stored Procedures support up to 2,100. This is a relatively minor limitation because most routines require far fewer parameters.
  13. UDFs cannot call stored procedures (except extended procedures), whereas a stored procedure can call other procedures.
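
To make the SELECT-vs-EXEC distinction concrete, here is a minimal T-SQL sketch (the dbo.Contacts table and its columns are hypothetical):

-- A scalar UDF: can be used inside a query
CREATE FUNCTION dbo.GetFullName (@First NVARCHAR(50), @Last NVARCHAR(50))
RETURNS NVARCHAR(101)
AS
BEGIN
    RETURN @First + N' ' + @Last;
END
GO

-- A stored procedure: run with EXEC, and free to modify data
CREATE PROCEDURE dbo.UpdateLastName
    @ID INT,
    @Last NVARCHAR(50)
AS
    UPDATE dbo.Contacts SET LastName = @Last WHERE ID = @ID;
GO

-- The function can appear in a SELECT; the procedure cannot
SELECT dbo.GetFullName(FirstName, LastName) AS FullName FROM dbo.Contacts;
EXEC dbo.UpdateLastName @ID = 1, @Last = N'Smith';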

Tuesday, May 29, 2007

Paging In ASP.NET

http://aspnet.4guysfromrolla.com/articles/091003-1.aspx

Introduction
When making the transition from ASP to ASP.NET, you will discover that paging through database records has become both remarkably simple and more difficult at the same time. The DataGrid control has made it a breeze to create a Web page that allows the user to page through the records of a database query. However, the simplicity offered by the DataGrid's default paging mechanism comes at a cost of performance.

Essentially, the DataGrid offers two modes of paging support: default paging and custom paging. Default paging is the easier of the two to implement, and can be done with just a few lines of code. However, realize that the default paging method retrieves all of the data but displays only the small subset that belongs on the current page. That is, every time a user navigates to a different page of the data, the DataGrid re-retrieves all of the data.

When dealing with a small amount of data, this behavior is not a major concern. For small datasets, the simplicity of default paging outweighs its inefficiency, but for large amounts of data, the unnecessary overhead of the default paging can be detrimental to the performance of the Web application and database.

As a solution for the default paging's inefficiency, the DataGrid also provides custom paging, which avoids that inefficiency by retrieving only the data that is to be displayed on the current page. As its name implies, the custom paging method requires you, the developer, to devise some way to select only those records that need to be displayed on a specific page. There are a number of techniques that can be used to employ custom paging, from stored procedures not unlike the one used in Paging Through Records Using a Stored Procedure, to more complicated SQL expressions. While custom paging is efficient, it requires more complicated programming from the developer.

As you can see, the ease-of-use offered by the DataGrid's paging functionality comes at the expense of either efficiency or development time. There's a larger problem lingering as well. What if you're not using the DataGrid control at all? Neither the Repeater nor the DataList has built-in paging. Obviously, depending on the particular needs of your ASP.NET page, you may not choose to use any of these controls. You may iterate through the results of a query programmatically, as in classic ASP. Simply put, if you don't use the DataGrid, you'll find that paging has actually become more difficult than in ASP. ADO.NET does not support the built-in paging properties and methods of ADO. You are forced to write a paging solution from the ground up.

In the following article, I'll explain how to do just this. The paging solution I'll present will avoid the problems inherent with the DataGrid's paging methods and will be control-independent. That is, the paging solution will work with any of the data controls or with no control at all. It will allow the user to sort records by any column, will not make any unrealistic/limiting assumptions about a table's primary key, and will efficiently retrieve the data required for each page. A screenshot of the paging system in action is shown below.


Database Configuration

At the top of code-behind class, I've stored the database parameters we'll be using throughout the code in variables:

Protected ConnString As String = "server=local;database=Test;Trusted_Connection=true"
Protected TableName As String = "Contacts"
Protected PrimaryKeyColumn As String = "ID"
Protected DefaultSortColumn As String = "DateAdded"
Protected ColumnsToRetrieve As String = "DateAdded,Email,LastName"

These variables will allow you to easily configure the code-behind class to work with different tables/columns. If you are using SQL Server, these are the only lines of code in the code-behind class that must be changed. If you are not using SQL Server, you will need to alter the class to use the namespace(s) and associated classes that pertain to your database.

With regard to the HTML portion, you will simply need to change the column headers and databound columns in the Repeater templates to account for the column names you wish to retrieve. For example, in Paging.aspx you will find the Repeater to have the following HeaderTemplate and ItemTemplate:
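
(The following is a representative sketch of those templates; the original article's exact markup may differ, and the bound columns follow ColumnsToRetrieve above.)

<asp:Repeater id="rptResults" runat="server">
   <HeaderTemplate>
      <table>
      <tr><th>Date Added</th><th>Email</th><th>Last Name</th></tr>
   </HeaderTemplate>
   <ItemTemplate>
      <tr>
         <td><%# DataBinder.Eval(Container.DataItem, "DateAdded") %></td>
         <td><%# DataBinder.Eval(Container.DataItem, "Email") %></td>
         <td><%# DataBinder.Eval(Container.DataItem, "LastName") %></td>
      </tr>
   </ItemTemplate>
   <FooterTemplate>
      </table>
   </FooterTemplate>
</asp:Repeater>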



You will need to alter the HeaderTemplate and ItemTemplate so that it has column names from the data you are binding to the Repeater. (Of course, you can replace the Repeater with the DataList or DataGrid; the point of using the Repeater in Paging.aspx was to illustrate that this paging solution was not limited only to the DataGrid Web control.)

Examining the Page_Load() Event Handler

The remainder of this article examines the various event handlers and methods in the code-behind class. Let's start by examining the Page_Load event handler, which has three main tasks:

First, it retrieves all the primary key values from the table in question. This operation ensures the accuracy of the paging, as it provides an up-to-date count of the records available for paging:
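
(A sketch of the query, built from the configuration variables shown earlier; the original article's exact code may differ.)

Dim strSQL As String = "SELECT " & PrimaryKeyColumn & " FROM " & TableName & _
                       " ORDER BY " & DefaultSortColumn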

At this point, you may be wondering, "Isn't this what the DataGrid's default paging does as well?" Actually, it's not. Remember that the DataGrid's default paging grabs all the data available, regardless of what page of data you are viewing. That is, it retrieves all the columns for every row. We're just retrieving one column from all the rows (a column that usually is just integers). Retrieving a single column's worth of data as opposed to every column's data is obviously far more efficient, particularly when dealing with tables with many columns of data. Furthermore, databases automatically index a table by its primary key, so selecting all of the values of just the primary key field(s) is very quick as a simple scan of the index can be performed without any table data accesses (if this makes no sense to you, don't worry! Just realize reading the values of a primary key is not an expensive operation).

After retrieving the primary key values, the Page_Load event handler uses a SqlDataReader to read the values into an ArrayList and then closes the database connection. This saves these values in a disconnected, easily manipulated format:
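
(A minimal sketch of that step inside the code-behind class, assuming Imports System.Data.SqlClient and System.Collections; the original article's exact code may differ.)

Dim conn As New SqlConnection(ConnString)
Dim cmd As New SqlCommand(strSQL, conn)
conn.Open()

Dim reader As SqlDataReader = cmd.ExecuteReader()
Dim primaryKeys As New ArrayList()
While reader.Read()
    primaryKeys.Add(reader(0))   ' save each primary key value
End While

reader.Close()
conn.Close()   ' the keys now live in a disconnected ArrayList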

(For those of you unfamiliar with ArrayLists, they are a type of collection found in the .NET Framework that allows for array-style handling of data. Unlike traditional arrays, however, ArrayLists behave in a far more intuitive fashion and do not need to be "re-dimmed" as they grow. There is also a myriad of helpful ArrayList methods that allow for fast sorting and manipulation of the data contained within the collection. For more information on the ArrayList check out this tutorial.)

Why not just use a DataSet, as opposed to reading the SqlDataReader's data into an ArrayList? I've found the latter method to be a bit faster and more user-friendly. The DataSet involves a lot of complexity that we don't need for the code in question and a great deal of overhead that we don't want. In a way, storing the SqlDataReader's data in an ArrayList provides us with a stripped-down, efficient "dataset" that is perfect for our needs.

Finally, after creating the ArrayList, if the page has not been posted back, the Page_Load event handler calls the Paging() method, displaying the first page of records when Paging.aspx first loads:
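
(Roughly, that check looks like this in the code-behind.)

If Not Page.IsPostBack Then
    Paging()   ' render the first page of records
End If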

.NET Assembly

http://en.wikipedia.org

For the counterpart to assembly language in the Microsoft .NET framework, see Common Intermediate Language.

In the Microsoft .NET framework an assembly is a partially compiled code library for use in deployment, versioning and security. In the Microsoft Windows implementation of .NET, an assembly is a PE (portable executable) file. There are two types, process assemblies (EXE) and library assemblies (DLL). A process assembly represents a process which will use classes defined in library assemblies. In version 1.1 of the CLR classes can only be exported from library assemblies; in version 2.0 this restriction is relaxed. The compiler will have a switch to determine if the assembly is a process or library and will set a flag in the PE file. .NET does not use the extension to determine if the file is a process or library. This means that a library may have either .dll or .exe as its extension.

The code in an assembly is compiled into MSIL, which is then compiled into machine language at runtime by the CLR.

An assembly can consist of one or more files. Code files are called modules. An assembly can contain more than one code module and since it is possible to use different languages to create code modules this means that it is technically possible to use several different languages to create an assembly. In practice this rarely happens, principally because Visual Studio only allows developers to create assemblies that consist of a single code module.

Assembly names

The name of an assembly consists of four parts:
  1. The short name. On Windows this is the name of the PE file without the extension.
  2. The culture. This is an RFC 1766 identifier of the locale for the assembly. In general, library and process assemblies should be culture neutral; the culture should only be used for satellite assemblies.
  3. The version. This is a dotted number made up of 4 values — major, minor, build and revision. The version is only used if the assembly has a strong name (see below).
  4. A public key token. This is a 64-bit hash of the public key which corresponds to the private key used to sign the assembly. A signed assembly is said to have a strong name.

The public key token is used to make the assembly name unique. Thus, two strong named assemblies can have the same PE file name and yet .NET will recognize them as different assemblies. The Windows file system (FAT32 and NTFS) only recognizes the PE file name, so two assemblies with the same PE file name (but different culture, version or public key token) cannot exist in the same Windows folder. To solve this issue .NET introduces something called the GAC (Global Assembly Cache) which is treated as a single folder by the .NET CLR, but is actually implemented using nested NTFS (or FAT32) folders.
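
You can see all four parts by inspecting an assembly's FullName (a minimal C# sketch):

using System;
using System.Reflection;

class ShowAssemblyName
{
    static void Main()
    {
        Assembly asm = typeof(string).Assembly;   // mscorlib
        Console.WriteLine(asm.FullName);
        // e.g. "mscorlib, Version=2.0.0.0, Culture=neutral,
        //       PublicKeyToken=b77a5c561934e089"
    }
}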

To prevent spoofing attacks, where a cracker would try to pass off an assembly appearing as something else, the assembly is signed with a private key. The developer of the intended assembly keeps the private key secret, so a cracker cannot have access to it, and cannot guess the associated public key. Thus the cracker cannot make his assembly impersonate something else. Signing the assembly involves taking a hash of important parts of the assembly and then encrypting the hash with the private key. The signed hash is stored in the assembly along with the public key. The public key will decrypt the signed hash. When the CLR loads a strongly named assembly it will generate a hash from the assembly and then compare this with the decrypted hash. If the comparison succeeds then it means that the public key in the file (and hence the public key token) is associated with the private key used to sign the assembly. This will mean that the public key in the assembly is the public key of the assembly publisher and hence a spoofing attack is thwarted.

Assemblies and .NET security

.NET Code Access Security is based on assemblies and evidence. Evidence can be anything deduced from the assembly, but typically it is created from the source of the assembly — whether the assembly was downloaded from the Internet, an intranet, or installed on the local machine (if the assembly is downloaded from another machine it will be stored in a sandboxed location within the GAC and hence is not treated as being installed locally). Permissions are applied to entire assemblies, and an assembly can specify the minimum permissions it requires through custom attributes (see .NET metadata). When the assembly is loaded the CLR will use the evidence for the assembly to create a permission set of one or more code access permissions. The CLR will then check to make sure that this permission set contains the required permissions specified by the assembly.

.NET code can perform a code access security demand. This means that the code will perform some privileged action only if all of the assemblies of all of the methods in the call stack have the specified permission. If one assembly does not have the permission a security exception is thrown.

.NET code can also perform a link demand to obtain a permission from the call stack. In this case the CLR checks only the immediate caller, the method at the top of the call stack, for the specified permission, rather than walking the entire stack. In effect the CLR assumes that all the other methods in the call stack have the specified permission.

Private and shared assemblies

When a developer compiles code the compiler will put the name of every library assembly it uses in the compiled assembly's .NET metadata. When the CLR executes the code in the assembly it will use this metadata to locate the assembly using a technology called Fusion. If the called assembly does not have a strong name, then Fusion will only use the short name (the PE file name) to locate the library. In effect this means that the assembly can only exist in the application folder, or in a subfolder, and hence it is called a private assembly because it can only be used by a specific application. Versioning is switched off for assemblies that do not have strong names, and so this means that it is possible for a different version of an assembly to be loaded than the one that was used to create the calling assembly.

The compiler will store the complete name (including version) of a strongly named assembly in the metadata of the calling assembly. When the called assembly is loaded, Fusion will ensure that only an assembly with the exact name, including the version, is loaded. Fusion is configurable, and so you can provide an application configuration file to tell Fusion to use a specific version of a library when another version is requested.

Shared assemblies are stored in the GAC. This is a system-wide cache and all applications on the machine can use any assembly in the cache. To the casual user it appears that the GAC is a single folder; however, it is actually implemented using FAT32 or NTFS nested folders, which means that there can be multiple versions (or cultures) of the same assembly.

Satellite assemblies

In general, assemblies should only contain culture-neutral resources. If you want to localize your assembly (for example use different strings for different locales) you should use satellite assemblies — special, resource-only assemblies. Satellites are not loaded by Fusion and so they should not contain code. As the name suggests, a satellite is associated with an assembly called the main assembly. That assembly (say, lib.dll) will contain the neutral resources (which Microsoft says is International English, but implies to be US English). Each satellite has the name of the associated library appended with .resources (for example lib.resources.dll). The satellite is given a non-neutral culture name, but since this is ignored by existing Windows file systems (FAT32 and NTFS) this would mean that there could be several files with the same PE name in one folder. Since this is not possible, satellites must be stored in subfolders under the application folder. For example, a satellite with the UK English resources will have a .NET name of "lib.resources, Version=0.0.0.0, Culture=en-GB, PublicKeyToken=null", a PE file name of lib.resources.dll, and will be stored in a subfolder called en-GB.

Satellites are loaded by a .NET class called System.Resources.ResourceManager. The developer has to provide the name of the resource and information about the main assembly (with the neutral resources). The ResourceManager class will read the locale of the machine and use this information and the name of the main assembly to get the name of the satellite and the name of the subfolder that contains it. ResourceManager can then load the satellite and obtain the localized resource.

Fusion

File systems in common use by Windows (FAT32, NTFS, CDFS, etc.) are restrictive because the file names do not include information like versioning or localization. This means that two different versions of a file cannot exist in the same folder unless their names include versioning information. Fusion is the Windows loader technology that allows versioning and culture information to be used in the name of a .NET assembly that is stored on these file systems. Besides being the system that loads managed assemblies into a process, Fusion is also currently used to load Win32 assemblies, independent of managed assembly loading.
Fusion uses a specific search order when it looks for an assembly.

  1. If the assembly is strongly named it will first look in the GAC.
  2. Fusion will then look for redirection information in the application's configuration file. If the library is strongly named then this can specify that another version should be loaded, or it can specify an absolute address of a folder on the local hard disk, or the URL of a file on a web server. If the library is not strongly named, then the configuration file can specify a subfolder beneath the application folder to be used in the search path.
  3. Fusion will then look for the assembly in the application folder with either the extension .exe or .dll.
  4. Fusion will look for a subfolder with the same name as the short name (PE file name) of the assembly and then looks for the assembly in that folder with either the extension .exe or .dll.


If Fusion cannot find the assembly, the assembly image is bad, or if the reference to the assembly doesn't match the version of the assembly found, it will throw an exception. In addition, information about the name of the assembly, and the paths that it checked, will be stored. This information may be viewed by using the Fusion log viewer (fuslogvw), or if a custom location is configured, directly from the HTML log files generated.

Referencing assemblies

One can reference an executable code library by using the /reference flag of the C# compiler.
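
For example (MyLibrary.dll is a placeholder):

csc /reference:MyLibrary.dll Program.cs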

Delaysigning of an assembly

Shared assemblies need to be given a strong name to uniquely identify an assembly that might be shared among applications. The strong name consists of the public key token, culture, version and PE file name. If an assembly is still under development but is intended to be shared, the signing can be deferred: with delay signing, only the public key is embedded when the assembly is built. The assembly is actually signed with the private key only when it is ready to be deployed.

Language of an assembly

The assembly is built up from MSIL code. MSIL is, in effect, the assembly language of the .NET runtime: the framework compiles high-level language code down to it. If we have a program that prints "Hello world", the equivalent MSIL code is:

.method private hidebysig static void Main(string[] args) cil managed
{
  .entrypoint
  .custom instance void [mscorlib]System.STAThreadAttribute::.ctor() = ( 01 00 00 00 )
  // Code size 11 (0xb)
  .maxstack 1
  IL_0000:  ldstr      "Hello World"
  IL_0005:  call       void [mscorlib]System.Console::WriteLine(string)
  IL_000a:  ret
} // end of method Class1::Main
So the MSIL code first loads the string onto the stack, then calls the WriteLine function; the call instruction records where control should return once the function is over.

Garbage Collection in ASP.NET

About garbage collection

Every program uses resources of one sort or another: memory buffers, network connections, database resources, and so on. In fact, in an object-oriented environment, every type identifies some resource available for a program's use. To use any of these resources, memory must be allocated to represent the type.

The steps required to access a resource are as follows:
  1. Allocate memory for the type that represents the resource.
  2. Initialize the memory to set the initial state of the resource and to make the resource usable.
  3. Use the resource by accessing the instance members of the type (repeat as necessary).
  4. Tear down the state of the resource to clean up.
  5. Free the memory.

The garbage collector (GC) of .NET completely absolves the developer from tracking memory usage and knowing when to free memory.

The Microsoft® .NET CLR (Common Language Runtime) requires that all resources be allocated from the managed heap. You never free objects from the managed heap; objects are automatically freed when they are no longer needed by the application.

Memory is not infinite, so the garbage collector must perform a collection in order to free some memory. The garbage collector's optimizing engine determines the best time to perform a collection (the exact criteria are guarded by Microsoft) based upon the allocations being made. When the garbage collector performs a collection, it checks for objects in the managed heap that are no longer being used by the application and performs the necessary operations to reclaim their memory.

However, for automatic memory management the garbage collector has to know the location of the roots, i.e. it should know when an object is no longer in use by the application. This knowledge is made available to the GC in .NET by the inclusion of a concept known as metadata. Every data type used in .NET software includes metadata that describes it. With the help of metadata, the CLR knows the layout of each of the objects in memory, which helps the garbage collector in the compaction phase of garbage collection. Without this knowledge the garbage collector wouldn't know where one object instance ends and the next begins.

Garbage Collection Algorithm

Application Roots

Every application has a set of roots. Roots identify storage locations that refer to objects on the managed heap or that are set to null.

For example:

  • All the global and static object pointers in an application.
  • Any local variable/parameter object pointers on a thread's stack.
  • Any CPU registers containing pointers to objects in the managed heap.
  • Pointers to objects in the freachable queue.

The list of active roots is maintained by the just-in-time (JIT) compiler and the common language runtime, and is made accessible to the garbage collector's algorithm.

Implementation

Garbage collection in .NET is done using tracing collection and specifically the CLR implements the Mark/Compact collector.

This method consists of two phases as described below.

Phase I: Mark

Find memory that can be reclaimed.

When the garbage collector starts running, it makes the assumption that all objects in the heap are garbage. In other words, it assumes that none of the application's roots refer to any objects in the heap.

The following steps are included in Phase I:

  • The GC identifies live object references or application roots.
  • It starts walking the roots and building a graph of all objects reachable from the roots.
  • If the GC attempts to add an object already present in the graph, then it stops walking down that path. This serves two purposes. First, it helps performance significantly, since it doesn't walk through a set of objects more than once. Second, it prevents infinite loops should you have any circular linked lists of objects. Thus cycles are handled properly.

Once all the roots have been checked, the garbage collector's graph contains the set of all objects that are somehow reachable from the application's roots; any objects that are not in the graph are not accessible by the application, and are therefore considered garbage.

Phase II: Compact

Move all the live objects to the bottom of the heap, leaving free space at the top.

Phase II includes the following steps:

  • The garbage collector now walks through the heap linearly, looking for contiguous blocks of garbage objects (now considered free space).
  • The garbage collector then shifts the non-garbage objects down in memory, removing all of the gaps in the heap.
  • Moving the objects in memory invalidates all pointers to the objects. So the garbage collector modifies the application's roots so that the pointers point to the objects' new locations.
  • In addition, if any object contains a pointer to another object, the garbage collector is responsible for correcting these pointers as well.

After all the garbage has been identified, all the non-garbage has been compacted, and all the non-garbage pointers have been fixed-up, a pointer is positioned just after the last non-garbage object to indicate the position where the next object can be added.

Finalization

.NET Framework's garbage collection implicitly keeps track of the lifetime of the objects that an application creates, but falls short when it comes to the unmanaged resources (e.g. a file, a window or a network connection) that objects encapsulate.

The unmanaged resources must be explicitly released once the application has finished using them. The .NET Framework provides the Object.Finalize method: a method that the garbage collector runs on the object to clean up its unmanaged resources, prior to reclaiming the memory used up by the object. Since the Finalize method does nothing by default, it must be overridden if explicit cleanup is required.

It would not be surprising if you consider Finalize just another name for destructors in C++. Though both have been assigned the responsibility of freeing the resources used by objects, they have very different semantics. In C++, destructors are executed immediately when the object goes out of scope, whereas a Finalize method is called once, when garbage collection gets around to cleaning up an object.

The potential existence of finalizers complicates the job of garbage collection in .NET by adding some extra steps before freeing an object.

Whenever a new object, having a Finalize method, is allocated on the heap a pointer to the object is placed in an internal data structure called Finalization queue. When an object is not reachable, the garbage collector considers the object garbage. The garbage collector scans the finalization queue looking for pointers to these objects. When a pointer is found, the pointer is removed from the finalization queue and appended to another internal data structure called Freachable queue, making the object no longer a part of the garbage. At this point, the garbage collector has finished identifying garbage. The garbage collector compacts the reclaimable memory and the special runtime thread empties the Freachable queue, executing each object's Finalize method.

The next time the garbage collector is invoked, it sees that the finalized objects are truly garbage and the memory for those objects is then, simply freed.

Thus when an object requires finalization, it dies, then lives (resurrects) and finally dies again. It is recommended to avoid using the Finalize method unless required. Finalize methods increase memory pressure by keeping the memory and resources used by the object from being released until two garbage collections have occurred. And since you have no control over the order in which Finalize methods are executed, relying on them may lead to unpredictable results.

Garbage Collection Performance Optimizations

  • Weak references
  • Generations

Weak References

Weak references are a means of performance enhancement, used to reduce the pressure placed on the managed heap by large objects.

When a root points to an object, it's called a strong reference to the object, and the object cannot be collected because the application's code can reach the object.

When an object has a weak reference to it, it basically means that if there is memory pressure and the garbage collector runs, the object can be collected; when the application later attempts to access the object, the access will fail. On the other hand, to access a weakly referenced object, the application must obtain a strong reference to it. If the application obtains this strong reference before the garbage collector collects the object, then the GC cannot collect the object, because a strong reference to it exists.
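
A minimal C# sketch of re-obtaining a strong reference from a weak one:

using System;

class WeakReferenceDemo
{
    static void Main()
    {
        // Wrap a large object in a WeakReference so the GC may reclaim it
        WeakReference weak = new WeakReference(new byte[1024 * 1024]);

        // ... later: try to get a strong reference back
        byte[] data = (byte[])weak.Target;   // null if already collected
        if (data != null)
        {
            // A strong reference now exists; the object cannot be collected
            Console.WriteLine("Still alive: " + data.Length + " bytes");
        }
    }
}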

The managed heap contains two internal data structures whose sole purpose is to manage weak references: the short weak reference table and the long weak reference table.

Weak references are of two types:

  1. A short weak reference doesn't track resurrection.
    i.e. the object which has a short weak reference to itself is collected immediately without running its finalization method.
  2. A long weak reference tracks resurrection.
    i.e. the garbage collector collects object pointed to by the long weak reference table only after determining that the object's storage is reclaimable. If the object has a Finalize method, the Finalize method has been called and the object was not resurrected.

These two tables simply contain pointers to objects allocated within the managed heap. Initially, both tables are empty. When you create a WeakReference object, an object is not allocated from the managed heap. Instead, an empty slot in one of the weak reference tables is located; short weak references use the short weak reference table and long weak references use the long weak reference table.

Consider an example of what happens when the garbage collector runs. The diagrams (Figure 1 & 2) below show the state of all the internal data structures before and after the GC runs.




Now, here's what happens when a garbage collection (GC) runs:

  1. The garbage collector builds a graph of all the reachable objects. In the above example, the graph will include objects B, C, E, G.
  2. The garbage collector scans the short weak reference table. If a pointer in the table refers to an object that is not part of the graph, then the pointer identifies an unreachable object and the slot in the short weak reference table is set to null. In the above example, slot of object D is set to null since it is not a part of the graph.
  3. The garbage collector scans the finalization queue. If a pointer in the queue refers to an object that is not part of the graph, then the pointer identifies an unreachable object and the pointer is moved from the finalization queue to the freachable queue. At this point, the object is added to the graph since the object is now considered reachable. In the above example, though objects A, D, F are not included in the graph they are treated as reachable objects because they are part of the finalization queue. Finalization queue thus gets emptied.
  4. The garbage collector scans the long weak reference table. If a pointer in the table refers to an object that is not part of the graph (which now contains the objects pointed to by entries in the freachable queue), then the pointer identifies an unreachable object and the slot is set to null. Since both the objects C and F are a part of the graph (of the previous step), neither of them is set to null in the long weak reference table.
  5. The garbage collector compacts the memory, squeezing out the holes left by the unreachable objects. In the above example, object H is the only object that gets removed from the heap and its memory is reclaimed.

Generations

Since garbage collection cannot complete without stopping the entire program, it can cause arbitrarily long pauses at arbitrary times during the execution of the program. Garbage collection pauses can also prevent programs from responding to events quickly enough to satisfy the requirements of real-time systems.

One feature of the garbage collector that exists purely to improve performance is called generations. A generational garbage collector takes into account two facts that have been empirically observed in most programs in a variety of languages:

  1. Newly created objects tend to have short lives.
  2. The older an object is, the longer it will survive.

Generational collectors group objects by age and collect younger objects more often than older objects. When initialized, the managed heap contains no objects. All new objects added to the heap can be said to be in generation 0, until the heap fills up, which triggers garbage collection. As most objects are short-lived, only a small percentage of young objects are likely to survive their first collection. Once an object survives its first garbage collection, it gets promoted to generation 1, and newer objects created after the collection are said to be in generation 0. The garbage collector is invoked next only when the sub-heap of generation 0 fills up again. All objects in generation 1 that survive get compacted and promoted to generation 2, all survivors in generation 0 get compacted and promoted to generation 1, and generation 0 is then empty, ready to receive new objects.

Thus, as objects "mature" (survive multiple garbage collections) in their current generation, they are moved to the next older generation. Generation 2 is the maximum generation supported by the runtime's garbage collector. When future collections occur, any surviving objects currently in generation 2 simply stay in generation 2.

Thus, dividing the heap into generations of objects and collecting and compacting younger generation objects improves the efficiency of the basic underlying garbage collection algorithm by reclaiming a significant amount of space from the heap and also being faster than if the collector had examined the objects in all generations.
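
You can watch an object being promoted with the GC.GetGeneration method (a minimal sketch):

using System;

class GenerationDemo
{
    static void Main()
    {
        object o = new object();
        Console.WriteLine(GC.GetGeneration(o));   // 0: freshly allocated

        GC.Collect();                             // o survives the collection
        Console.WriteLine(GC.GetGeneration(o));   // 1: promoted once

        GC.Collect();
        Console.WriteLine(GC.GetGeneration(o));   // 2: the maximum generation
    }
}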

A garbage collector that can perform generational collections, each of which is guaranteed (or at least very likely) to require less than a certain maximum amount of time, can help make the runtime suitable for real-time environments and also prevent pauses that are noticeable to the user.

Garbage Collection in Various Environments

One of the new features inherently available in the .NET framework is automatic garbage collection. The term garbage collection can be defined as management of the allocation and release of memory in an application. In C++, developers are responsible for allocating memory for objects created in the application and releasing the memory when the object is no longer needed. COM introduced a new model for memory management: reference counting. Programmers were only responsible for incrementing the reference count when the object is referenced and decrementing the counter when the object goes out of scope. When the object's reference count reaches zero, the object is deleted and the memory gets freed. Both these schemes depend on the developer and, from time to time, result in memory leaks and application exceptions: conditions that occur at run time and are not detectable at compilation.

Garbage Collection in .Net

The garbage collector in .NET takes care of the bulk of the memory management responsibility, freeing the developer to focus on core issues. The garbage collector is optimized to perform the memory free-up at the best time based upon the allocations being made. Java developers have long enjoyed the benefits of garbage collection; VB developers are also used to a certain amount of flexibility in these terms, and .NET provides full-fledged memory management capabilities for managed resources.

Release Unmanaged Resources - Runtime Garbage Collector

The .NET developer is still responsible for tracking and managing "unmanaged" resources. Examples of unmanaged resources are operating system resources such as files, windows or network connections. The framework can track when the unmanaged resource needs to be terminated, but it does not have information on how to terminate the resource and free up the memory. For clean-up of these resources, the framework provides destructors in C# and Managed Extensions for C++, and the Finalize method for other programming languages. The developer must override the Finalize method (or implement the destructor for C#/Managed Extensions) to release and terminate the unmanaged resources.

When the Garbage collector executes, it does not delete objects which have the Finalize method overridden. Instead, it adds them to a separate list called the Finalization queue. A special runtime thread becomes active and calls Finalize methods for these objects and then removes them from this list. When the garbage collector runs for the next time, these objects are terminated and the memory is released.

Release Unmanaged Resources - Application Developer

The Finalize method should be invoked by the framework directly and should not be accessible for invocation from the application. The type's Dispose or Close method is available to developers for application clean-up. The Finalize method becomes a safety catch in case the application does not call the Dispose method for some reason. You should implement the Dispose method for the type and invoke the parent type's Dispose method. In the Dispose method, you can call the GC.SuppressFinalize method to prevent the Finalize method from being invoked for this object, as the resources have already been released in the Dispose method. This allows you to implement the Dispose method to release managed as well as unmanaged objects. You can provide a Boolean parameter to the Dispose method to indicate disposal of managed resources. In case the application does not invoke the Dispose method, the runtime invokes the Finalize method on that object, thus avoiding potential problems. The Finalize method can be implemented to invoke the same Dispose method with a parameter of false, so that only the unmanaged resources are released and the runtime takes care of the managed resources.
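
Put together, the pattern described above looks roughly like this in C# (a minimal sketch; the unmanaged handle is illustrative):

using System;

public class ResourceHolder : IDisposable
{
    private IntPtr unmanagedHandle;   // stands in for a file/window/connection handle
    private bool disposed = false;

    public void Dispose()
    {
        Dispose(true);                // release managed and unmanaged resources
        GC.SuppressFinalize(this);    // Finalize no longer needs to run
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // release managed resources here
        }
        // release unmanaged resources here
        disposed = true;
    }

    ~ResourceHolder()                 // the safety catch if Dispose is never called
    {
        Dispose(false);
    }
}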

The "using" statement available in C# automatically calls Dispose and provides a convenient way to deal with unmanaged resources which are required for a lifetime that extends within the method in which the objects are created. In other languages, you can use the Finally block to release the resources used within a Try block.

Programmatically Invoking the Garbage Collector

In applications with significant memory requirements, you can force garbage collection by invoking the GC.Collect method from the program. This is not recommended and should be used only in extreme cases.

System.GC class

The System.GC class provides methods that control the system garbage collector. Use methods from this class in your application with extreme caution.
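
If you do force a collection, the usual sequence also lets pending finalizers run first (use sparingly, as noted above):

GC.Collect();                    // identifies finalizable garbage, filling the freachable queue
GC.WaitForPendingFinalizers();   // lets the runtime thread run those Finalize methods
GC.Collect();                    // reclaims the objects whose finalizers have now run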

Refer to this link also for more information
http://www.codeproject.com/managedcpp/garbage_collection.asp

1) Does garbage collection deal with reference types or value types?
Ans. Reference types

2) Does the garbage collector destroy value types or reference types?
Ans. The garbage collector deals only with the managed heap, hence only reference types. Value types are popped off the stack as they go out of scope, so you need not worry about them.

The ASP.NET Page Life Cycle

Introduction

When a page request is sent to the Web server, whether through a submission or location change, the page is run through a series of events during its creation and disposal. When we try to build ASP.NET pages and this execution cycle is not taken into account, we can cause a lot of headaches for ourselves. However, when used and manipulated correctly, a page's execution cycle can be an effective and powerful tool. Many developers are realizing that understanding what happens and when it happens is crucial to effectively writing ASP.NET pages or user controls. So let's examine in detail the ten events of an ASP.NET page, from creation to disposal. We will also see how to tap into these events to implant our own custom code.

I'll set the stage with a simple submission form written in ASP.NET with C#. The page is loaded for the first time and has several server-side Web controls on it. When the Web server receives a request for the page, it will process our Web controls and we will eventually get rendered HTML. The first step in processing our page is object initialization.

1. Object Initialization

A page's controls (and the page itself) are first initialized in their raw form. By declaring your objects within the constructor of your C# code-behind file (see Figure 1), the page knows what types of objects and how many to create. Once you have declared your objects within your constructor, you may then access them from any sub class, method, event, or property. However, if any of your objects are controls specified within your ASPX file, at this point the controls have no attributes or properties. It is dangerous to access them through code, as there is no guarantee of what order the control instances will be created (if they are created at all). The initialization event can be overridden using the OnInit method.


Figure 1 - Controls are initialized based on their declaration.
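
A minimal C# sketch of hooking initialization in a code-behind class (the control here is illustrative):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class SamplePage : Page
{
    protected TextBox NameBox;   // declared so the page knows what to create

    protected override void OnInit(EventArgs e)
    {
        // Controls exist at this point but carry no viewstate or postback data yet
        base.OnInit(e);
    }
}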

2. Load Viewstate Data

After the Init event, controls can be referenced using their IDs only (no DOM is established yet for relative references). At the LoadViewState event, the initialized controls receive their first properties: viewstate information that was persisted back to the server on the last submission. The page viewstate is managed by ASP.NET and is used to persist information over a page roundtrip to the server. Viewstate information is saved as a string of name/value pairs and contains information such as control text or value. The viewstate is held in the value property of a hidden control that is passed from page request to page request. As you can see, this is a giant leap forward from the old ASP 3.0 techniques of maintaining state. This event can be overridden using the LoadViewState method and is commonly used to customize the data received by the control at the time it is populated. Figure 2 shows an example of overriding and setting viewstate at the LoadViewState event.


Figure 2 - When LoadViewState is fired, controls are populated with the appropriate viewstate data.
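
Overriding it in the code-behind looks roughly like this:

using System.Web.UI;

public class ViewStatePage : Page
{
    protected override void LoadViewState(object savedState)
    {
        base.LoadViewState(savedState);
        // Controls now hold the values persisted on the previous round trip
    }
}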

3. LoadPostData Processes Postback Data

During this phase of the page creation, form data that was posted to the server (termed postback data in ASP.NET) is processed against each control that requires it. When a page submits a form, the framework will implement the IPostBackDataHandler interface on each control that submitted data. The page then fires the LoadPostData event and parses through the page to find each control that implements this interface and updates the control state with the correct postback data. ASP.NET updates the correct control by matching the control's unique ID with the name/value pair in the NameValueCollection. This is one reason that ASP.NET requires unique IDs for each control on any given page. Extra steps are taken by the framework to ensure each ID is unique in situations, such as several custom user controls existing on a single page. After the LoadPostData event triggers, the RaisePostDataChanged event is free to execute (see below).

4. Object Load

Objects take true form during the Load event. All objects are first arranged in the page DOM (called the Control Tree in ASP.NET) and can be referenced easily through code or relative position (crawling the DOM). Objects are then free to retrieve the client-side properties set in the HTML, such as width, value, or visibility. During Load, coded logic, such as arithmetic, setting control properties programmatically, and using the StringBuilder to assemble a string for output, is also executed. This stage is where the majority of work happens. The Load event can be trapped by overriding OnLoad, as shown in Figure 3.


Figure 3 - The OnLoad event is an ideal location to place logic.
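
A sketch of the override, inside the code-behind class (lblGreeting is a hypothetical Label):

// C#
using System;
using System.Text;

protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);

    // The Control Tree is complete, so controls can be read and set freely.
    StringBuilder sb = new StringBuilder();
    sb.Append("Hello, ");
    sb.Append("world");
    lblGreeting.Text = sb.ToString();
}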

5. Raise PostBack Change Events

As stated earlier, this occurs after all controls that implement the IPostBackDataHandler interface have been updated with the correct postback data. During this operation, each control is flagged with a Boolean indicating whether its data actually changed or remains the same since the previous submit. ASP.NET then sweeps through the page looking for flags indicating that any object's data has been updated and fires RaisePostDataChanged. The RaisePostDataChanged event does not fire until all controls are updated and after the Load event has occurred. This ensures data in another control is not manually altered during the RaisePostDataChanged event before it is updated with postback data.

6. Process Client-Side PostBack Event

After the server-side events fire on data that was changed due to postback updates, the object which caused the postback is handled at the RaisePostBackEvent event. The offending object is usually a control that posted the page back to the server due to a state change (with autopostback enabled) or a form submit button that was clicked. There is often code that will execute in this event, as this is an ideal location to handle event-driven logic. The RaisePostBackEvent event fires last in the series of postback events so that the data rendered to the browser is accurate.

For consistency, controls that are changed during postback should not be updated after the event-handling function executes. That is, data that is changed by an anticipated event should always be reflected in the resulting page. The RaisePostBackEvent event can be trapped by implementing the RaisePostBackEvent method, as in Figure 4.


Figure 4 - The RaisePostDataChangedEvent method is defined by the IPostBackDataHandler interface; RaisePostBackEvent is defined by the IPostBackEventHandler interface.
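
A sketch of a control handling the postback event itself (the names are hypothetical; the RaisePostDataChangedEvent half of the picture appears in the step 3 sketch above):

// C#
using System;
using System.Web.UI;

public class MyButton : Control, IPostBackEventHandler
{
    public event EventHandler Click;

    // Called only on the control that caused the postback,
    // e.g. a clicked submit button.
    public void RaisePostBackEvent(string eventArgument)
    {
        if (Click != null)
            Click(this, EventArgs.Empty);
    }
}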

7. Prerender the Objects

The point at which the objects are prerendered is the last time changes to the objects can be saved or persisted to viewstate. This makes the PreRender step a good place to make final modifications, such as changing properties of controls or changing Control Tree structure, without having to worry about ASP.NET making changes to objects based on database calls or viewstate updates. After the PreRender phase, those changes to objects are locked in and can no longer be saved to the page viewstate. The PreRender step can be overridden using OnPreRender.

8. ViewState Saved

The viewstate is saved after all changes to the page objects have occurred. Object state data is persisted in the hidden object, and this is also where object state data is prepared to be rendered to HTML. At the SaveViewState event, values can be saved to the ViewState object, but changes to page controls are no longer persisted. You can override this step by using SaveViewState, as shown in Figure 5.



Figure 5 - Values are set for controls in OnPreRender. During the SaveViewState event, values are set for the ViewState object.
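
Minimal sketches of both overrides, inside the code-behind class (lblGreeting is hypothetical):

// C#
protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);

    // Last chance for control changes that should survive in viewstate.
    lblGreeting.Text = "Final value";
}

protected override object SaveViewState()
{
    // Values written to the ViewState object here are persisted;
    // changes to controls at this point are not.
    ViewState["SavedAt"] = DateTime.Now.ToString();
    return base.SaveViewState();
}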

9. Render To HTML

The Render event commences the building of the page by assembling the HTML for output to the browser. During the Render event, the page calls on the objects to render themselves into HTML. The page then collects the HTML for delivery. When the Render event is overridden, the developer can write custom HTML to the browser that nullifies all the HTML the page has created thus far. The Render method takes an HtmlTextWriter object as a parameter and uses that to output HTML to be streamed to the browser. Changes can still be made at this point, but they are reflected to the client only. The Render event can be overridden, as shown in Figure 6 (below).

10. Disposal

After the page's HTML is rendered, the objects are disposed of. During the Dispose event, you should destroy any objects or references you have created in building the page. At this point, all processing has occurred and it is safe to dispose of any remaining objects, including the Page object. You can override Dispose, as shown in Figure 6.


Figure 6 - The Render event will output custom HTML to the browser through the HtmlTextWriter object.
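
Minimal sketches of both overrides, inside the code-behind class:

// C#
using System.Web.UI;

protected override void Render(HtmlTextWriter writer)
{
    // Anything written here goes straight to the browser; skipping
    // base.Render would discard the HTML the page built for its controls.
    writer.Write("<h1>Custom header</h1>");
    base.Render(writer);
}

public override void Dispose()
{
    // Release anything created while building the page.
    base.Dispose();
}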

Conclusion

Each time we request an ASP.NET page, we run through the same process from initialization to disposal. By understanding the inner workings of the ASP.NET page process, writing and debugging our code becomes much easier and more effective (not to mention less frustrating).

Thursday, May 24, 2007

MetaData

.NET metadata, in the Microsoft .NET Framework, refers to data that describes .NET CIL (Common Intermediate Language) code. A .NET language compiler will generate the metadata and store it in the assembly containing the CIL. Metadata describes all classes and class members that are defined in the assembly, and the classes and class members that the current assembly will call from another assembly. The metadata for a method contains the complete description of the method, including the class (and the assembly that contains the class), the return type and all of the method parameters. When the CLR executes CIL it will check to make sure that the metadata of the called method is the same as the metadata that is stored in the calling method. This ensures that a method can only be called with exactly the right number of parameters and exactly the right parameter types.

Attributes

Developers can add metadata to their code through attributes. There are two types of attributes, custom and pseudo custom attributes, and to the developer these have the same syntax. Attributes in code are messages to the compiler to generate metadata. A pseudo custom attribute is metadata that the CLR knows about, for example [Serializable] (which means that an instance of the class can be serialized). The 'pseudo' in pseudo custom attribute refers to the fact that the compiler will not use it to generate custom metadata; instead, it will generate metadata that the CLR knows about.

Example (C#):

[Serializable]
public class MyClass
{
    ...
}

When the compiler sees a custom attribute it will generate custom metadata that is not recognised by the CLR. The developer has to provide code to read the metadata and act on it. For example, the Visual Studio property grid groups together properties of an object that is being viewed using categories; the class developer indicates the category for the object's class by applying the [Category] custom attribute. In this case it is application code — the property grid — that interprets the attribute, not the CLR.
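
A sketch of that scenario, using the standard [Category] attribute on a hypothetical class, with application code reading it back:

// C#
using System;
using System.ComponentModel;
using System.Reflection;

public class MyControl
{
    [Category("Appearance")]
    public string Caption
    {
        get { return caption; }
        set { caption = value; }
    }
    private string caption;
}

class AttributeDemo
{
    static void Main()
    {
        // Application code (like the property grid) reads the attribute
        // via reflection; the CLR itself ignores it.
        PropertyInfo prop = typeof(MyControl).GetProperty("Caption");
        CategoryAttribute attr = (CategoryAttribute)
            Attribute.GetCustomAttribute(prop, typeof(CategoryAttribute));
        Console.WriteLine(attr.Category);   // prints "Appearance"
    }
}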

How metadata is stored

Assemblies contain tables of metadata. These tables are described by the CIL specification. The metadata tables have zero or more entries, and the position of an entry determines its index. When CIL code uses metadata it does so through a metadata token. This is a 32-bit value where the top 8 bits identify the appropriate metadata table, and the remaining 24 bits give the index of the metadata in the table. The Framework SDK contains a sample called metainfo that will list the metadata tables in an assembly; however, this information is rarely of use to a developer.
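
The split can be seen from managed code through the MetadataToken property (the member chosen here is arbitrary):

// C#
using System;
using System.Reflection;

class TokenDemo
{
    static void Main()
    {
        MethodInfo m = typeof(string).GetMethod("ToUpper", Type.EmptyTypes);
        int token = m.MetadataToken;
        int table = token >> 24;          // top 8 bits: metadata table
        int index = token & 0x00FFFFFF;   // low 24 bits: index in that table
        Console.WriteLine("table 0x{0:X2}, index {1}", table, index);
    }
}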

Metadata in an assembly may be viewed using the ILDASM tool provided by the .NET Framework SDK.

Reflection

Reflection is the API used to read .NET metadata. The reflection API provides a logical view of metadata rather than the literal view provided by tools like metainfo. Reflection in version 1.1 of the .NET Framework allows you to inspect the descriptions of classes and their members, and to execute code. However, version 1.1 does not allow you to get access to the CIL for a method; version 2.0 of the Framework does allow you to obtain the CIL for a method.
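
A small sketch of both capabilities, inspecting members and executing code (the type inspected here is arbitrary):

// C#
using System;
using System.Reflection;

class ReflectionDemo
{
    static void Main()
    {
        // Inspect the description of a class and its members.
        Type t = typeof(Uri);
        foreach (MethodInfo method in t.GetMethods())
            Console.WriteLine("{0} {1}", method.ReturnType.Name, method.Name);

        // Execute code through reflection.
        object uri = Activator.CreateInstance(t, "http://example.com/");
        Console.WriteLine(t.GetProperty("Host").GetValue(uri, null));   // example.com
    }
}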

Besides the System.Reflection namespace, the following tools are available for reading .NET metadata and parsing IL:

PostSharp
Mono Cecil

Tuesday, May 22, 2007

Giving a .NET Assembly a Strong Name

You assign an assembly a strong name by associating the assembly with a pair of 1,024-bit cryptographic public and private keys. The actual process varies slightly, depending on whether the developer has access to the private key. In larger, security-oriented corporations, most developers do not have access to the private key. Instead, only a few members of a final QA or security team can access the private key.

In order to assign one or more assemblies a strong name, you must first create the 1,024-bit public and private key pair. You do this by running the Strong Name Utility (SN.EXE), like so:

sn.exe -k PublicPrivateKeyFile.snk

This creates a random pair of 1,024-bit cryptographic keys. You can use these keys for encryption and decryption using the RSA public/private key algorithm. The resulting key file contains both the public and the private key. You can extract the public key from the file and place it in a separate file like this:

sn.exe -p PublicPrivateKeyFile.snk PublicKeyFile.snk

Typically, you will only perform the above steps once per corporation or division because all assemblies you produce can use the same public and private keys as long as the assemblies have unique friendly text names.

Next, you need to associate the 1,024-bit public key with an assembly. You do this by telling the compiler to read the contents of a key file, extract the public key from the key file, and place the public key into the definition of the assembly's identity. In effect, this makes the public key an extension of the friendly text name of the assembly. This also makes the assembly name globally unique because no other developer will be using the same 1,024-bit public key as part of their assemblies' names.

When you have access to the key file that contains both the public and the private key, you can associate the assembly with a public key and digitally sign the assembly (discussed later) in a single operation by including the following compiler metadata instructions in one of your assembly's source files.

// C#
using System.Reflection;

[assembly: AssemblyDelaySign(false)]
[assembly: AssemblyKeyFile("PublicPrivateKeyFile.snk")]

Alternatively, when you only have access to the key file that contains just the public key, you must enable delay signing of the assembly. Therefore, instead of the above attributes, you need to specify the following attributes.

// C#
using System.Reflection;

[assembly: AssemblyDelaySign(true)]
[assembly: AssemblyKeyFile("PublicKeyFile.snk")]

At this point, your assembly has a strong name. Further, if you specified that the compiler should not delay-sign the assembly (therefore, the compiler did sign the assembly), the assembly is also a valid assembly and you can load it, debug it, and generally use it as you wish.

However, if you specified that the compiler should delay-sign the assembly (therefore, the compiler did not sign the assembly), you will discover that the runtime considers the assembly to be invalid and will not load it or allow you to debug and run it.

When the compiler digitally signs an assembly, it calculates a cryptographic digest of the contents of the assembly. A cryptographic digest is a fancy hash of your assembly's file contents. Let's call this cryptographic digest the compile-time digest of the assembly. The compiler encrypts the compile-time digest using the 1,024-bit private key from your public-private key pair file. The compiler then stores this encrypted compile-time digest into the assembly. Note that this all happens during development.

Sometime later, whenever the .NET loader loads an assembly with a strong name, the loader itself calculates a cryptographic digest of the contents of the assembly. Let's call this digest the runtime digest of the assembly. The loader then extracts the encrypted compile-time digest from the assembly, extracts the public key for the assembly from the assembly itself, and uses the public key to decrypt the previously encrypted compile-time digest. The loader then compares the calculated runtime digest to the decrypted compile-time digest. When they are not equal, something or someone has modified the assembly since you compiled it; therefore, the runtime fails the assembly load operation.
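
As a conceptual sketch only (the real verification is internal to the CLR, which also excludes the signature blob itself from the hash; the key blob format and byte-array parameters shown here are assumptions):

// C# - conceptual sketch, not the CLR's actual implementation
using System.Security.Cryptography;

class StrongNameSketch
{
    static bool VerifyStrongName(byte[] hashableAssemblyBytes,
                                 byte[] storedSignature,
                                 byte[] publicKeyBlob)
    {
        // The loader's runtime digest of the assembly contents.
        byte[] runtimeDigest = SHA1.Create().ComputeHash(hashableAssemblyBytes);

        // Use the embedded public key to check the encrypted
        // compile-time digest against the runtime digest.
        RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
        rsa.ImportCspBlob(publicKeyBlob);
        return rsa.VerifyHash(runtimeDigest,
                              CryptoConfig.MapNameToOID("SHA1"),
                              storedSignature);
    }
}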

Note: Based on the above description, when you do not have access to the private key, you cannot encrypt the compile-time digest, thus cannot sign the assembly. In this case, you must delay-sign the assembly. Setting the delay signing option to true tells the compiler that you'll calculate, encrypt, and store the compile-time digest into the assembly at a later time. Because a delay-signed assembly does not contain a valid digital signature, the runtime will not load it.

However, developers without access to the private key need to be able to debug and test a delay-signed assembly. Therefore, such developers must inform the runtime that it should load an assembly even though it does not contain a valid digital signature.

You can instruct the runtime to skip the digital signature verification process for a particular assembly on a specific system by using the following command:

sn.exe -Vr YourAssembly.dll

Technically, this instruction registers the specified assembly for digital signature verification skipping. It is a sticky setting in that the runtime never again verifies the digital signature for that assembly until you unregister it for verification skipping using the inverse command:

sn.exe -Vu YourAssembly.dll

A developer without access to the private key cannot digitally sign an assembly, and so must register the assembly for verification skipping in order to debug and test it.

Finally, once the developer determines that the assembly operates correctly, she hands off the assembly to the team that does have access to the private key. That team performs the delay signing operation on the assembly by using the following command:

sn.exe -R YourAssembly.dll PublicPrivateKeyFile.snk

The strong name utility computes the compile-time cryptographic digest for the assembly, encrypts the digest with the private key from the key file, and stores the encrypted digest into the assembly. The assembly will now load successfully on all systems.

Now, let's consider the effect of obfuscation on this process.

One .NET obfuscator, Demeanor for .NET, uses the .NET runtime to load the assembly it is obfuscating to ensure that it obfuscates exactly those assemblies the .NET runtime will load when an application executes. Therefore, the assembly to be obfuscated must be loadable. This means that, in order for Demeanor for .NET to load an assembly, one of the following must be true:

1) It must contain a valid digital signature or,
2) You must have enabled digital signature verification skipping for the assembly on the obfuscating system.

Obfuscation modifies the assembly. This means that even when the assembly contains a valid digital signature before the obfuscation process (step one above), the assembly will not contain a valid digital signature after the obfuscation process.

Therefore, after obfuscating an assembly, you must either re-sign the assembly (which computes the proper digest, encrypts it, and stores it into the assembly), or you must only use the assembly on systems with digital signature verification skipping enabled, which isn't really a practical option except for developers debugging the obfuscated code.

Practically, this means that you end up using the delay-signing process when obfuscating an assembly with a strong name. The typical usage pattern for a developer using obfuscation is:

1. Build the assembly with delay signing enabled.
2. Enable strong name verification skipping for the assembly (sn.exe -Vr).
3. Debug and test the assembly.
4. Obfuscate the assembly.
5. Debug and test the obfuscated version.
6. Delay sign the assembly (sn.exe -R).

Alternatively, smaller shops that allow all developers access to the private key do the following:

1. Build the assembly with delay signing disabled.
2. Debug and test the assembly.
3. Obfuscate the assembly.
4. Delay sign the assembly (sn.exe -R).
5. Debug and test the obfuscated version.