Private constructors and static instance creators

Since .NET 3 there has been a significant change in how Microsoft, its "partners" and open source projects have delivered their assemblies. Up to .NET 2 there was heavy usage of constructors with parameters, but the introduction of WPF, WCF, WF and LINQ pretty much forced the definition of types with parameterless constructors.

.NET even introduced property setting right after construction (object initializers) to help the move to a parameterless-constructor world. Who doesn't remember all those problems with the special constructor required for deserialization? Funny errors appeared whenever, for any reason, an object was serialized and deserialized inside an engine like the Workflow Persistence Service.

Personally I have never felt comfortable with constructors with parameters. I have always been writing framework code with heavy use of delegates and dynamic code generation, and parameters in constructors never helped assemblies with these kinds of types. But there are always cases that require constructors with parameters. There is no official best-practices approach on this matter, but here is what I think about it.

I make the following distinctions on the whys and why-nots of constructor overloading.

The don't-do-it cases:

  1. A class instance is too volatile to leave its functionality to improper or unguided initialization. This can be solved in two ways: either use a bunch of guard methods inside every call, or use static instance creators. The good thing about the second approach is that the static creators are ordinary functions that can differ in both signature and name. The name is very important, because classes of this kind usually have serious variations in behavior based on the input parameters. In that case overloading doesn't help, and proper naming guides both the code and the developer better. Different function names also help a lot in a stack trace, because you can clearly see the name of the function that crashed, whereas constructors by design all share the same name. A stack trace isn't good at revealing which of the overloaded functions or constructors was used, so you always need to be very careful when deciding to overload.
  2. The class can be initialized in totally different ways. This means the constructors would execute different code and probably would not end up in a common code path. Cases like this usually reflect the developer's wish to provide a helper that properly initializes his class for the various scenarios he can think of. But in the spirit of code decoupling I strongly believe these "helpers" should not be constructors but static functions, probably with different names. This case resembles the first one, and one could argue it is the same thing seen in a mirror from a different angle. The reason I differentiate it is that it is similar to most of the IO and XML type initializers that ship with .NET. All of them target the same functionality but for different channels: files, streams, in-memory streams and so forth. Most of these classes provide static instance creators, overloaded or differently named depending on the situation. The reality is that for such a class to reach the same ready-to-use state, different things need to happen, with a variety of possible errors; a file, for example, may not exist. But the end result is an instance that you know is ready and stable to use. (See the sketch after this list.)
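To make the above concrete, here is a minimal sketch of the private-constructor-plus-named-static-creators pattern. All the type and member names are hypothetical, purely for illustration:

using System;
using System.IO;
using System.Xml;

public sealed class DataDocument
{
    private readonly XmlDocument document = new XmlDocument();

    // Private constructor: instances can only be produced by the named
    // creators below, so a DataDocument is always properly initialized.
    private DataDocument() { }

    // Each creator has a descriptive name and its own failure modes,
    // and that name shows up clearly in any stack trace.
    public static DataDocument FromFile(string path)
    {
        if (!File.Exists(path))
            throw new FileNotFoundException("Source file not found.", path);
        var instance = new DataDocument();
        instance.document.Load(path);
        return instance;
    }

    public static DataDocument FromXml(string xml)
    {
        if (string.IsNullOrEmpty(xml))
            throw new ArgumentException("No XML content supplied.", "xml");
        var instance = new DataDocument();
        instance.document.LoadXml(xml);
        return instance;
    }
}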
The go-for-it cases:
  1. The classic service initialization paradigm from the MVC controller and service examples. Here a class really needs to execute the same logic on some private members, but it allows different initial values for those fields. The common characteristic of such cases is that all constructors end up in the same code through the use of the this keyword. This means that, more or less, the class is always constructed the same way, because construction always finishes with the same piece of code. Overloading constructors here gives flexibility but never adds functional change, and the default constructor always remains available.
  2. The most important one for me is when the class has persistable state. If you feel that for any reason the class acts as a container for data, then a default constructor is at the very least required; and if the class could for any reason need to be persisted in the future, the default constructor becomes even more important. Even a service-like class should have a default constructor, because even if it isn't holding data it is holding a reference to something that provides functionality. Such service-like classes can be easily initialized by engines like WCF and by unit testing frameworks. (A sketch of both cases follows this list.)
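As a rough illustration of the chaining idea, again with made-up names, note how the default constructor funnels into the single parameterized one:

public interface IOrderRepository { /* data access members */ }
public class OrderRepository : IOrderRepository { }

public class OrderService
{
    private readonly IOrderRepository repository;

    // Default constructor: keeps the class usable by WCF, serializers
    // and unit tests, chaining to the parameterized one with a default.
    public OrderService() : this(new OrderRepository()) { }

    // The single place where construction logic actually lives.
    public OrderService(IOrderRepository repository)
    {
        this.repository = repository;
    }
}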

As a rule of thumb, I would suggest that constructors with parameters should never be used as a means of converting a variety of external arguments into an internal set. The conversion should always live outside the scope of the constructor, for better error handling and code decoupling.

Additionally, I believe that type instance initialization should be as fast as possible, and any heavy operation (one that has the potential to go wrong) should be delivered through some sort of function. Don't forget that constructors cannot be unit tested on their own.

To summarize, I would only add constructors beyond the default one for two major reasons:

  • To mock the provided service that would normally be created internally. See the MVC controller and service examples.
  • To provide slight variations of the default constructor. Such a variation never changes the creation process, and it is mandatory for all constructors to share the same piece of code. Unit testing shouldn't differ, and these variations should also be achievable by property setting, thus allowing the same unit testing patterns. But since object initializers let you set properties in the same statement as the constructor call, this option is becoming obsolete. (See the sketch below.)
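A small sketch of that last point, with a made-up class, showing why the object initializer makes the "slight variation" overload redundant:

using System;

public class RetryPolicy
{
    public int MaxAttempts { get; set; }
    public TimeSpan Delay { get; set; }

    public RetryPolicy() { MaxAttempts = 3; }

    // A "slight variation" overload: it must chain to the same code.
    public RetryPolicy(int maxAttempts) : this() { MaxAttempts = maxAttempts; }
}

public static class Example
{
    public static void Main()
    {
        // The object initializer expresses the same variation in one statement:
        var policy = new RetryPolicy { MaxAttempts = 5, Delay = TimeSpan.FromSeconds(1) };
        Console.WriteLine(policy.MaxAttempts);
    }
}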

Identity Wars

Since 2007 I have taken a great interest in the subject of identities on the web. That was because around 2007 I learned about Facebook, saw Google really pushing its identity unification, and learned about CardSpace through the release of Microsoft .NET 3. Ever since, I have been really interested in what is going on with these companies and why they are challenging each other so much on this matter.

All the above companies, including Twitter, are trying to make their users use their credentials on every other site. All of them are also releasing software components that allow sites and applications to provide authentication functionality based on Facebook's, Google's and so forth credentials.

If you are interested in why these companies are making this effort and investment, then keep reading.

First, some introduction. Every security system has three key components:

  • Authentication. This process is about verifying a user, through credentials, as a valid identity for a specific system.
  • Authorization. This process is about enabling or disabling the system's functionality for each authenticated identity.
  • Auditing. This is about keeping track of the user-identity's actions.

Most people see all three steps from the scope of one specific system. In the best-case scenario the system is composed of a set of several applications, so Single Sign-On is a good piece of functionality to have. This is what most users experienced with Google's security unification and with Microsoft's upgrade from Passport to Live ID.

The key thing is that all these companies have realized the importance of providing authentication not for one specific system, not even for a set of systems, but for every application in the world. Google has branded the term search in our minds. Facebook and Twitter have done it for socializing. Think about how hard Google tried with Wave, and is trying now with Google+, to get a portion of Facebook and Twitter. Think also how hard Microsoft is trying to get a percentage from Google with Bing services. Isn't there a why behind all this?

Personally I think that in the world of globalization one thing has not yet been conquered or branded: identities. And this is what it is all about: which company will plant its brand name in people's minds as the synonym for identity.

All four companies, plus Oracle and WordPress as fifth and sixth contenders, are currently fighting for the identity brand using different tools and different financing. The common denominator is that all of them have provided toolkits that allow other applications to use their credentials as a security system. The difference lies in the services they have to offer and in how they financially support their involvement in this war.

  • Google mostly finances itself through Google Search and AdSense. It uses its auditing mechanism to better facilitate its advertisement service. Additionally, it provides a number of services to help convince you to sign up with its identity system. One thing currently missing is the social media sector, where Facebook and Twitter still have the upper hand. Lately Google has also been trying to promote its services through the Android mobile platform.
  • Facebook and Twitter use their social network infrastructure to finance their business. In effect, they sell yours, mine and everybody's audited data to make money.
  • Microsoft, on the other hand, was a bit late to this story. It mostly finances itself through its on-premises products like Windows and Office. Since Ballmer took over, Microsoft has shown great interest in cloud services and in moving those products to a SaaS model. The common denominator in all of Microsoft's efforts is forcing the use of Live ID. They even created a mobile operating system to force people to sign up with Live ID and use Bing services, and they are using the Azure services as a tool to further expand the usage of Live ID.

All these companies have the luxury of investing in this effort by financing it indirectly, either through the same tools or through existing products. But none of them likes users logging on to their services with credentials other than their own. Microsoft and Facebook, who share stock, are probably the only ones that have joined authentication across their systems; you may have seen this while using Live Messenger. Isn't it a bit strange that companies trying so hard to convince other vendors and companies to use their identities do not want to use anyone else's? They even gathered together to standardize the authentication process for their RESTful services.

The important thing to understand is that even if you are not using the Facebook identity to log on to an online application, Facebook knows about it indirectly because of the Like and Share buttons. Browsers have the tendency of supplying cookie information to each host with every request, and applications have the tendency to audit every request. So even by downloading an image from Facebook while reading a biking blog, you have let Facebook know about it. And Facebook is just an example.

There are all kinds of protocols and token formats that provide for this process. Single Sign-On (SSO) is the best-known name, because it is easy and it targets the users' convenience. In reality, SSO legally masks what is going on behind the scenes. It is important to remember that Single Sign-On works not only for the application you are interested in but for all the rest as well, through the mechanism I described above. Most developers are now starting to discover the tools available for SSO through names like Security Token Service (STS) and Identity Provider (IP). .NET developers will realize this more with a new feature that Visual Studio 11 makes available, which is effectively a developer self-hosted STS. These tools are still mostly for on-premises solutions, but if you understand the mechanism then maybe you can start realizing what is going on.

Most companies want to increase their revenue by auditing what users do with their applications. What most of these companies don't understand is that in order to do this, you need the user to be registered with you. All the companies fighting over identities realized at some point in the past that, although it may seem trivial, identity is the single most important asset of their enterprise, because with it they can audit every application. Companies that don't realize this, or are only realizing it now, unfortunately either need to start an effort in an already unfair and challenging war, or succumb and strike an agreement with the winners to get a part of their audit data. Google already allows this with Google Analytics.

You cannot audit something without knowing the identity you are auditing for. This is why I personally believe that identity is turning out to be the single most important asset of all web-based companies. If you compare it with real life, could anything work without government-controlled identities? Now think about it for the WWW and you will start understanding the magnitude and significance of having something like five companies providing identities for all e-persons in the world. Think about the comparison between the total number of governments in the real world and the number of e-identity providers in the virtual world. Is it less strange now that companies thought of as open have, until recently, been trying to keep governments like China happy? Besides revenue, would they risk, in the long run, having one billion people out of six start using a different identity provider than their own?

Windows Live ID and Country / Region

Currently I live in Greece, but I'm in the process of moving abroad.

A few months back I bought a Windows Phone 7 LG-E900, which I knew didn't support Greek. I bought it despite that huge problem, because it cost me only 200€ without a carrier contract and because I knew that when Mango came, it would be worth the wait.

At the time, I was forced to create a new Live ID, different from my MSN one, because I wanted access to the marketplace. During the creation process I chose UK as my country. Over the months I have installed several applications and also created some data collections.

Of course, all this time I couldn't and still can't sign in with Zune, because my computer's locale is different.

Now that Mango is almost out there, I find myself in the position of having to deal with this inexplicable limitation. To make matters worse, if I create a new Live ID for Greece, and actually transfer my contacts and re-install my applications, I also have to inform all my contacts about my new Live ID if I want to use the MSN integration.

Because I'm in the process of migrating to another country, I am discouraged from even thinking about doing all this over again for the new country's specific Live ID.

In the age of globalization, and with a Europe that promotes relocating, I can't really understand why they still impose these limitations.

I'm starting to think that WP7 was a wrong choice. Don't get me wrong, it is great, but as with other things, something great can be destroyed from the inside; in our case by Microsoft's decisions on this matter.

The Future of Hardware and Software in the IT industry.

In my latest post I wrote about the cloud, and how I believe it is really not something new, but a better way to provide efficiency.

Based on the Greek market, there are companies that profit from selling parts of, or whole, racks for data rooms. For years now the process has been something like: we require power, so we buy a combination of servers, storage units and networking. Greece can differ in many ways from the rest of the world, but not in this case.

Only in huge datacenters is there standardization. As the scale of the data room shrinks, so does the level of standardization in its implementation, expansion and management.

For me this is about to change because of the cloud. Cloud is many things, and many people want to get on board with it regardless of what their product actually is. Personally, I think the cloud is not something uniquely defined. It is the revolutionary wave that is hitting the IT industry at all levels. Both hardware and software will be affected by this wave, and I truly believe that, because of the virtualization technology that drives it, nothing will be the same in the business in a few years.

Let me explain my thoughts. All software must run on a physical or logical machine, but both kinds require an actual physical hardware implementation. Today, in the server business, we are used to buying server models based on their manufacturer and on some estimations about the software's requirements. We also take into consideration clustering, availability and backup. In general, for a normal-sized company that means ordering the right kind of hardware first and the software second. When something new comes along, the process is repeated!

Most companies do not have the knowledge for the above process, so there are companies that provide this kind of service. Whatever the reason or the method, big sellers of servers and networking components have a network of resellers and partners that are responsible for choosing the right combination and making sure that on delivery everything will work, based always on the original requirements of the customer. These partners also have some sort of support and management agreement with each customer, depending on the customer's chosen level of outsourcing.

The cloud will change this process entirely. Hardware will always be required, because regardless of the virtualization technology, at some level physical hardware is needed to run all the logical levels above it. The biggest difference will be that hardware will be measured not in specifications such as kind of processor or storage unit, but in simple units of measure such as processing power and storage size.

You may think at this point that there is nothing new here. But based on the methods applied in today's datacenters and on the Exadata and Exalogic implementations, in the near future companies are simply going to buy ten units of some measure of power. Today's Microsoft, Amazon and Google datacenters grow or shrink in containers that have a specific configuration and are simply plugged in or out. Oracle's Exa implementations do something similar: they come in quarter, half, three-quarters and full configurations, and can be combined with each other just as a container is plugged into a datacenter. The resemblance is so obvious that I expect traditional giants of the industry such as HP, Dell and so on to do the same, if they haven't already.

Keep in mind that in technology, whatever goes on in projects that are huge in scale always spills over to the lower-end implementations. This happens because the investment in research needs to at least cover its cost. Just as Formula 1 car companies move their research into their production cars, so will the companies behind those big datacenters move their know-how into smaller-scale implementations.

The key difference with this approach will be that the middleman becomes seriously less required, at worst obsolete. When selling something that is as easy to count as apples and oranges, why not sell it simply from an e-commerce site? Remember, besides some basic infrastructure, the delivered product will be something that is simply plugged in, in order to provide, for example, 10% more power. So hardware will still sell, maybe not as much and in a different form. The key players will still be in the business, but with different products.

The pattern is already there, in what I believe has been the cloud in its initial steps. Microsoft is not the first in the business, but the provisioning service of Azure is an example I can relate to. Do you want some extra power or storage for this month? Just click a new server role and everything is done for you automatically! For local premises, that is a private cloud small or large, the same model will come to apply.

Provisioning on your own hardware will definitely minimize the cost of IT infrastructure, but more significantly, it will allow local managers to provision their available resources at will or on a schedule, through a simple UI. It will also allow more efficient management of the various software applications running on the hardware platform. And if a department really needs some more of some "stuff", they will simply buy it and plug it in.

That said, it must be understood that the huge players in the server industry will, at worst, most probably slowly eliminate their need for their retailers. At best, they will just minimize the need for these partners or change the business model. The only thing that cannot be replaced is the actual administration. People or companies providing the administration of the hardware and software infrastructure will still be required, but in the hardware department most of the job will be done by the provisioning service of the data room.

Software will still be required, until the day an analogous method can be applied to the software production cycle. But as the wave hits the software industry, many companies will sadly find their products made obsolete by their multi-tenant, generic, cloud-based counterparts. So there may be more room for the software industry, but taking into account that software changes more quickly than hardware, software products and business models should also evolve.

One big difference with software is that, as it can change more quickly, it can also produce new ideas and provide solutions more quickly. Let's not forget that software development will always be a process of creation and production. You just need to create the space for your business.

Your choices are really going to be simplified. With a public cloud you rent the resources you consume, but your data could well be within the borders of another country (I talk about this in my previous post). With a private cloud you regulate how your resources are allocated, but your data stays on local premises. Economically, the difference is between renting and buying. From the IT manager's point of view, there is simply just resource management.

In conclusion, if you haven't already started thinking about how the cloud wave will hit you, and whether you are going to be at the bottom or the top of it, then don't waste any more time. The changes will be dramatic for all players in this business.

Cloud? Genuinely new or just a better level of efficiency

It has been a while since I last posted on this blog, but I would like to write down my thoughts about this cloud thing that has been buzzing around the IT industry.

Recently, I watched two meetings/presentations held by Microsoft and Oracle. In both meetings there was this word, the cloud, that came up all the time. Both presentations presented the cloud as the same thing in theory, but what they were actually proposing was entirely different, mostly because both companies were using the "cloud" as a marketing scheme for their products.

For me, cloud is a buzzword that everyone wants to hop on; a word that everyone wants to somehow put on his product. Seriously, what exactly is new about the cloud? Don't get me wrong, it will seriously change IT, but is it really something new, or is it just a better and more efficient way of providing existing services?

For as long as I can remember anything about computers, there have been two major marketing efforts for something that was not new, just better. In both efforts, marketing succeeded in convincing us how new the subject being marketed was, when it was merely better, or better presented. For technology-oriented stuff, such a deception is not common.

The first of these campaigns was Windows 95; it seriously commercialized computers and the Internet. The second one is the cloud, and it seriously wants to "revolutionize" our relationship with data, if Google hasn't already done that. Along with the cloud come three acronyms: SaaS, PaaS and IaaS.

Isn't e-mail basically a software endpoint provided to us by a remote provider? Isn't hosting basically a platform service? Seriously, what is so new about the cloud that we have created around 37 definitions of it, as mentioned by both Microsoft's and Oracle's representatives? Is the SaaS, PaaS and IaaS provided by datacenters better? Definitely, a hell of a lot better. Is it new? For me, definitely not, and there is a huge difference in this.

What was seriously new in this story? Virtualization! Well, not exactly new either, but virtualization was the technological condition that raised the efficiency of the previously mentioned services to another level. The difference is so significant that this change alone can summarize the cloud.

One category that proves my point is the g-cloud, or private cloud. However you look at it, it is still a datacenter. Banks and other large organizations have had them for years. New technology, both software and hardware, allows more efficient utilization of the power, measured in flops, bytes and watts. That doesn't change the fact that it is still a datacenter; it just has a management console that governs these resources dynamically and at will, and is thus more efficient. I'm not saying that this is not huge. It is; but besides the virtualization driving all these wonderful things, there is really nothing new.

For me, the cloud is an immensely efficient way of providing IT products as a service to a global market, both in the form of hardware and of software. There is nothing new in it as a product; rather, it is now possible to market it at this scale.

For example, the presentation at Microsoft mostly revolved around a G-Cloud for the Greek government and how Microsoft could provide its know-how from the Azure datacenters as a service to the Greek government. Oracle's presentation should have been named Exa-something, because it was really a presentation about the Exadata and Exalogic platforms and how you can use them for your private cloud. Public clouds, such as the Amazon, Google and Azure services, were mentioned only to lend validity to attaching the cloud word to these Exa products.

If someone should talk about the cloud, it should only be about public clouds, because private clouds are just more efficient datacenters, still on premises. The public cloud is something that can change how software, which is my area of expertise, can be sold and marketed. I will not talk about the benefits, such as cost or time to market, because everyone basically knows about them; instead I will talk about a basic concern of mine that nobody talks about. My concern is national borders, even for electronic data, and the power that comes from its centralization.

If it is not clear by now, I'm a Greek citizen and live in Greece. Greece, like many other countries, does not have a cloud-grade datacenter, not even a foreign-owned one. If I were a US-based company and chose to store my data and applications in a US-based datacenter, then besides security, networking and cost there would be nothing else to worry about. But for Greece it is different. Like it or not, there are countries, and every country looks for some kind of advantage. Country and ethnic relations are not always stable, as history shows even on a scale of years. For example, if I were the owner of a Greek company, would I choose to store the entire value of my company, that is my data, with a Turkish-based cloud provider? Would I choose to place my assets in a country or union that does not view my country very kindly?

I do not want to get political, but a network connection to a datacenter outside your country's borders is very vulnerable. It can change for many reasons. A country can block all IPs originating from a specific country as an act of aggression, or just as easily a country can block all outbound traffic to foreign countries as a move to control its population. Wars do not happen easily, but blackmail and hard bargains between countries do. It is just like wondering whether it is wise to base your entire economy on a single source that is not yours to control; oil dependence, for example.

On-premises infrastructure can fail for various reasons. A cloud-based datacenter theoretically provides safety on these issues. But if you are a non-US company, wouldn't you like to have those assets elsewhere, preferably in your own country? And isn't that in conflict with security and cost? Bottom line, in the worst-case scenario it is up to you to fix the problem. The huge difference is that your assets are at your disposal.

I'm sure that many businessmen will think only about the profit and choose to take the risk. That is not something new, and history proves that when money is involved everything else just disappears. I keep referencing US datacenters despite the fact that there are also non-US-located ones. But if you look at the geographical locations of the Azure datacenters, you will understand the difference between a US-interests and a non-US-interests company.

As a citizen I do not agree, but my job as a professional software engineer doesn't really change whether the installation is located in the next room or somewhere in another country. Microsoft has done an excellent job with Azure by giving us the ability to produce products that can mostly run in both environments.

Personally I think that cloud-based installations will blossom, and I can perfectly understand that. If I were to provide a non-critical application as a web service, I would definitely choose Azure, since I am .NET affiliated. The benefits are overwhelming, especially when the service would not define my survival in case things go wrong. If a customer asks for my advice, I will still give advice based on the advantages and the disadvantages.

Bottom line, it is the customer's choice as a strategic decision, and I choose to support both ways regardless.

Windows Phone 7

I'm truly a Microsoft fan. I don't like Apple's attitude at all towards users who understand a thing or two. I always thought that Windows Phone would be the mobile OS that would prove to most of us Apple criticizers that they were just being arrogant. Since the announcement of the platform I have noticed an Apple-like marketing strategy, much like the one for the first iPhone. I am one of those who acknowledge the spectacular UI of the iPhone but also criticize its lack of support for enterprise applications, because I can't really accept the true need for functionality capped at 4 inches. And 4 inches is not exactly mobile.

Today I was proven right, because I went to a Dev Day for Windows Phone 7 in Greece. I'm underlining Greece, because we all know that the platform will not be fully available in all countries at once.

A lady evangelist presented one of the many PowerPoint decks at their disposal for promoting the product. At some point came the slide with the map of the countries that would have access to the marketplace (orange color) and the countries that would only be able to upload (green color).

The slide could actually be used as a demotivator or a FAIL joke, because it triggered severe criticism of the supposed globality of the device and its marketplace. During the pressing questions, which led to some people leaving the presentation, it became clear to the Greek community that there would be no Greek keyboard on the device and that Greek consumers would not be able to use the marketplace, because there is no option for a global marketplace; even Apple has one. All the innuendos were made about jail-break-like solutions, creating fake Live IDs or masking your IP, but that puts us exactly in the target of our own criticism of Apple's choices.

It also became clear that the presentation was, in fact, made so that Greek developers would develop applications for every other nation except Greece itself, which for me is somewhat strange. I won't argue that there is big money to be made from the Greek customer base, but consumer products such as smartphones tend to become mainstream once they achieve a critical mass in a country. As an enterprise, we can't really sell an application to a customer when his country is supported neither by the marketplace nor by the OS itself.

During the questions, I asked one regarding the enterprise targeting of the platform. I work on enterprise solutions, and for my line of work enterprise applications are critical for the platform. One of the things I hoped for from the Windows Phone platform was that, with a small learning curve, one could produce enterprise-grade applications in order to add value to the whole solution. Of course, you need the critical mass of consumer users to make the platform enterprise-aware, because in the end normal users will be the ones using the applications. But when I asked the question at hand, the answer was "no, you cannot load an application outside the marketplace". I could accept that consumer applications can only get onto the device through the marketplace, but if I wanted to install a private enterprise application on all the mobile handsets of a company, there should be another way.

With the marketplace being the only way to install an application on Windows Phone, the platform is a consumer-only device. They mentioned a coming feature of the marketplace for private enterprise applications, but no timeframe was given.

Regarding the device, there won't be multitasking, something I also can't understand. Also, the native browser won't support Flash, and not even Silverlight. That means that all rich-media Microsoft-based sites won't be accessible through the platform. When asked about it, the answer was "we do not have an answer".

To summarize, I saw some really great things in the platform, and some really silly things that truly let me down. The good part was the ease with which Microsoft know-how can be used to create relatively similar applications, especially with XNA for gaming. The best part was the notifications. Instead of background processing, one can ask a cloud-based service to do the work and, when it finishes, receive a notification on the device that, when pressed, launches our application. This is natively supported by the platform, and one could argue it is a better solution than background processing.
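As a rough sketch of how the device side of that mechanism looks, based on the Microsoft.Phone.Notification API (the channel name and the cloud service call are made up for illustration):

using Microsoft.Phone.Notification;

public class NotificationSetup
{
    public void RegisterChannel()
    {
        // Reuse the channel if the application has already created one.
        HttpNotificationChannel channel = HttpNotificationChannel.Find("MyAppChannel");
        if (channel == null)
        {
            channel = new HttpNotificationChannel("MyAppChannel");
            channel.Open();
        }

        // The cloud service POSTs a notification to this URI when the work is done.
        channel.ChannelUriUpdated += (s, e) => SendUriToCloudService(e.ChannelUri);

        // Toast notifications launch the application when pressed.
        if (!channel.IsShellToastBound)
            channel.BindToShellToast();
    }

    private void SendUriToCloudService(System.Uri uri)
    {
        // Hypothetical: hand the URI over to our own web service here.
    }
}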

But the things that really let me down are:

  • No localization support. You can't even send an SMS in your native language.
  • No localization support for the marketplace.
  • No Flash or Silverlight support.
  • No copy-paste. How hard can it be? My Sony Ericsson K700 (not a smartphone) could do it.

I firmly believe that Microsoft is copying Apple's marketing strategy, but there is a difference now. Apple got free advertising through the buzz made about their methods, because at the time they presented a truly new approach to the mobile platform, with a great UI and a marketplace that was more than enough for the normal commercial user. By now even Apple has covered the ground on most of the criticism it received, and there is also the Android platform, which focuses on the functional side of the device and can also be used to create enterprise-grade applications. Microsoft is copying an old strategy, basing it on some new features none of which are truly groundbreaking, except maybe notifications. So on the Apple vs Microsoft side of the matter, for me there is nothing new, and Microsoft is trying to win an already lost market. There was also the criticism Microsoft indirectly aimed at Apple's choices, which will blow up in their faces because they really did not deliver anything of what they criticized. That leaves the enterprise market, which they also lose, because by their own words and choices the platform is consumer-oriented only.

It was really disappointing to see a potentially great platform end up being just a copy of already wrong choices.

Implementing Typed Configuration

I posted an article on CodeProject about an engine I have created that gives you the ability to easily view the various configuration sections in a typed manner.

The types that represent each section drive the engine to read the sections and populate instances with their values.

In the article, there is source code for the engine, together with various examples that demonstrate the above for various custom schemas in the configuration sections.
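For readers who want the flavor of the idea before jumping to the article, here is a minimal sketch of typed configuration using only the standard .NET APIs. This is not the engine's actual code or API; the section and type names are made up:

using System.Configuration;
using System.Xml;
using System.Xml.Serialization;

// The type that represents the custom section.
public class SmtpSettings
{
    public string Host { get; set; }
    public int Port { get; set; }
}

// A generic handler that deserializes a section's XML into a typed instance.
public class TypedSectionHandler<T> : IConfigurationSectionHandler
{
    public object Create(object parent, object configContext, XmlNode section)
    {
        var serializer = new XmlSerializer(typeof(T), new XmlRootAttribute(section.Name));
        using (var reader = new XmlNodeReader(section))
            return serializer.Deserialize(reader);
    }
}

// Usage, after registering the handler for "smtpSettings" in configSections:
// var settings = (SmtpSettings)ConfigurationManager.GetSection("smtpSettings");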

The article on CodeProject can be found here.

Setting Host name on SSL Binding on IIS7

Where I work, we have set up an IIS server that is going to serve all our various environments. Since we didn't want to create various ports, we decided to add a host name to the HTTP binding of each site.

One of our sites, though, needed a secure logon page, so we ran into the following problem:

You cannot set a host name on an SSL binding through Internet Information Services Manager.


While searching the Internet, I came across solutions that depend on the appcmd tool located in C:\Windows\System32\inetsrv.

With this tool you can create an SSL binding with a host name, but you cannot specify the SSL certificate; and if you then edit the SSL binding to add the certificate, the host name gets lost.

While looking at the help of appcmd, I understood that there is a way to edit the binding, as long as it can be found. I ran some tests and yes, the host name was added to an existing SSL binding while keeping the SSL certificate.

Because it is somewhat tricky to edit the binding from appcmd, I created a batch file that can be reused. The file contains one line:

call C:\Windows\System32\Inetsrv\appcmd set site /site.name:%1 /bindings.[protocol='https',bindingInformation='*:443:'].bindingInformation:*:443:%2

where

  • %1 is the site name
  • %2 is the host name you want to add

If you would like some explanation: the first bindingInformation is basically a search filter that matches an SSL binding with no declared host name, and the second is the new value you want to set on the binding that was found. The syntax is much like a dictionary lookup in C#, and I admit that once I understood it, I liked it.
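For example, assuming the one-liner above was saved as setsslhost.bat (a made-up name, as are the site and host names), you would call it like this:

setsslhost.bat MySecureSite www.example.com

which expands to:

call C:\Windows\System32\Inetsrv\appcmd set site /site.name:MySecureSite /bindings.[protocol='https',bindingInformation='*:443:'].bindingInformation:*:443:www.example.com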

If the operation is successful, you will see output confirming the change. Most commonly, if the filter is not right for whatever reason, appcmd will inform you that it could not find the specified binding.

Finally, the steps to give an SSL binding a host name are:

  1. Create the SSL binding with the certificate.
  2. Run the above batch file with the appropriate parameters, as explained above.
  3. Restart the site, as various articles I read instructed, though while I was experimenting I never needed to.
  4. Never edit the binding afterwards, because you will lose the host name. The host name won't be displayed in the edit dialog, so don't be alarmed, as long as it is displayed in the bindings list.

Quickly Build SQL Connection Strings

Today I ran across a post that explained how to obtain a connection string to a data source using only Windows Explorer and a text editor.

There are three steps, and they are truly simple:

  1. Create a blank file anywhere with a .udl extension.
  2. Double-click the file and create the connection from within the usual UI. You can even test the connection.
  3. View/edit the file with a text editor and you will find the connection string.
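For reference, a .udl file is just a text file. After step 2, its contents look something like the following (the database and server names here are made up):

[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=MyDatabase;Data Source=MYSERVER

If you need a plain SqlConnection string rather than an OLE DB one, drop the Provider part and keep the rest.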