Today I stumbled over the following: there are circular references in the .NET Framework. For example, System.Xml references System.Configuration, and System.Configuration references System.Xml. See the following:
There are a lot of these circular references in the framework, and I wonder why. And how does Microsoft manage to build this?
After reading about Google’s MapReduce design pattern for highly scalable applications a while ago, I now found some time to write my own small example implementation in C#. Ok, ok, this itself is not scalable at all, but it illustrates the concept. One can see that .NET’s generics and delegates are very useful here.
Here’s the download: MapReduceCSharp.zip (7 KB)
(To run the unit test, you need to have NUnit installed)
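The core of such an implementation fits in a few lines. The following is a minimal, single-threaded sketch of the idea, not the code from the download; it assumes the `Func<>` delegates of .NET 3.5, and all type and method names are made up for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A minimal, single-threaded MapReduce sketch (illustration only):
// Map emits key/value pairs, the pairs are grouped by key, and
// Reduce folds each group of values into one result.
static class MapReduce
{
    public static IDictionary<TKey, TResult> Run<TInput, TKey, TValue, TResult>(
        IEnumerable<TInput> inputs,
        Func<TInput, IEnumerable<KeyValuePair<TKey, TValue>>> map,
        Func<TKey, IEnumerable<TValue>, TResult> reduce)
    {
        // Map phase: every input may emit any number of intermediate pairs.
        var intermediate = new Dictionary<TKey, List<TValue>>();
        foreach (TInput input in inputs)
        {
            foreach (KeyValuePair<TKey, TValue> pair in map(input))
            {
                List<TValue> bucket;
                if (!intermediate.TryGetValue(pair.Key, out bucket))
                {
                    bucket = new List<TValue>();
                    intermediate[pair.Key] = bucket;
                }
                bucket.Add(pair.Value);
            }
        }

        // Reduce phase: fold the values collected for each key into one result.
        var results = new Dictionary<TKey, TResult>();
        foreach (KeyValuePair<TKey, List<TValue>> entry in intermediate)
            results[entry.Key] = reduce(entry.Key, entry.Value);
        return results;
    }
}

class Program
{
    static void Main()
    {
        // The classic word-count example: map emits (word, 1), reduce sums.
        IDictionary<string, int> counts = MapReduce.Run<string, string, int, int>(
            new[] { "the quick fox", "the lazy dog" },
            line => line.Split(' ').Select(w => new KeyValuePair<string, int>(w, 1)),
            (word, ones) => ones.Count());

        Console.WriteLine(counts["the"]); // 2
    }
}
```

In a real distributed implementation, the map calls and the per-key reduce calls are exactly the parts that can be spread over many machines; the single-machine version only has to group by key in between.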
Recently I’ve set up a continuous integration environment in our company for all our .NET source code. Previously we only had a daily build. Our C++ code is still on that basis, and maybe we can convince the C++ guys that continuous integration is worth doing (I’m deeply convinced that it is).
As the basis for our continuous integration, I’ve used TeamCity. Alternatively you could also use CruiseControl.NET, but TeamCity altogether has more features, including a nice AJAX-based UI for all the administration and monitoring. In CruiseControl.NET you have to do all the administration via config files.
Before doing continuous integration, our whole build was already done via MSBuild files. This made it easy to integrate the whole build process into TeamCity, because it has built-in support for them.
One difficulty was making it work with StarTeam, our version control system. TeamCity (I used 1.2) currently has no built-in support for it. To fix that, I had to write some script code that does the check-out via StarTeam’s command line interface.
Automating the unit tests was quite easy. We wrote some NAnt files which call the NUnit tests and integrated them into TeamCity.
Besides that, we also integrated FxCop for static code analysis, again using NAnt files. TeamCity has one drawback: it does not allow you to include custom web pages in its UI. So I had to implement a separate page for displaying the FxCop results. The same applies if you want to use NCover or NDepend (which I plan to do next).
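The NAnt side of this is small. A sketch of what such targets might look like (target names, assembly names and paths are made up for illustration; `<nunit2>` is NAnt’s built-in NUnit task, and FxCop is called via its command line tool):

```xml
<target name="test" depends="build">
  <!-- Run the NUnit tests and write an XML report TeamCity can pick up. -->
  <nunit2>
    <formatter type="Xml" usefile="true" extension=".xml" outputdir="reports" />
    <test assemblyname="build\MyProject.Tests.dll" />
  </nunit2>
</target>

<target name="analyze" depends="build">
  <!-- No built-in FxCop task here, so call the command line tool directly. -->
  <exec program="FxCopCmd.exe">
    <arg value="/file:build\MyProject.dll" />
    <arg value="/out:reports\fxcop-results.xml" />
  </exec>
</target>
```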
Yesterday I had to troubleshoot an ASP.NET-based application which ran out of memory. Contrary to what I expected, this was quite difficult, because some of the available .NET profilers had problems doing that, maybe because of the size of the web application.
- dotTrace crashed when trying to profile memory. This is disappointing, because it is really a nice tool for performance profiling.
- ANTS showed totally wrong results. Instead of more than 200 MB allocated memory, it only showed 8 MB.
- Only SciTech’s Memory Profiler worked and allowed me to browse through the allocated memory. Really nice.
This is a quite interesting read for everyone who is interested in how Google processes large amounts of data in their distributed environment.
I had already read a lot about the Mono project, which allows you to develop and run .NET applications on Unix-compatible platforms (e.g. GNU/Linux, Mac OS X), but I had never tried it out. Now I thought it’s time to check how well it works and how easy it is to write applications that run with both the Microsoft .NET Framework and Mono.
I started by setting up a VMware virtual machine with GNU/Linux (Ubuntu) and everything needed to work with Mono: MonoDevelop, NUnit, NAnt, Subversion.
From time to time I’ll report about my experience with Mono. For today it’s enough.
In Visual Studio 2005 the built-in Web Site and Web Service project types are very restricted. For those who already worked with the predecessor projects in VS 2003, it is inscrutable why Microsoft replaced them with such “toy projects”. I do not know why web projects should be much less configurable than all other projects. We used them in our project; working with them was not easy, and it was hard to integrate them bug-free into the nightly build (by default they generate no binary, there are no pre- and post-build steps, and the references are hard to maintain).
Now it seems that Microsoft has recognized the problem. They are currently developing a VS 2005 Web Application project type that is no longer limited and has the look and feel of the other VS 2005 project types. It works for both web sites and web services.
A preview is available here: Visual Studio 2005 Web Application Project Preview
There is also a new project, that makes deployment of web applications easier: Visual Studio 2005 Web Deployment Projects (Beta V2 Preview)
Recently we had a short discussion on what character encoding should be used in XML: UTF-16 or UTF-8?
One thing has to be mentioned first, because the two are often mixed up: UTF-8 as well as UTF-16 are Unicode encodings. The difference lies in how the characters are encoded as bytes. UTF-8 is a variable-length encoding: ASCII characters take one byte, while umlauts such as “ä” take two, so ASCII-only documents stay byte-compatible with legacy tools. UTF-16 encodes most characters, including plain ASCII, as two bytes each. In both encodings you can write umlauts literally, e.g. <node>äöü</node>, or escape them with character references such as &#x00E4;; character references are independent of the encoding.
XML parsers have to implement support for UTF-8 as well as for UTF-16. That has nothing to do with whether your source code is compiled as Unicode or ANSI; the correct conversion is the responsibility of the parser.
Excerpt from the W3C XML standard:
Each external parsed entity in an XML document MAY use a different encoding for its characters. All XML processors MUST be able to read entities in both the UTF-8 and UTF-16 encodings. The terms “UTF-8” and “UTF-16” in this specification do not apply to character encodings with any other labels, even if the encodings or labels are very similar to UTF-8 or UTF-16.
Entities encoded in UTF-16 MUST and entities encoded in UTF-8 MAY begin with the Byte Order Mark described by Annex H of [ISO/IEC 10646:2000], section 2.4 of [Unicode], and section 2.7 of [Unicode3] (the ZERO WIDTH NO-BREAK SPACE character, #xFEFF). This is an encoding signature, not part of either the markup or the character data of the XML document. XML processors MUST be able to use this character to differentiate between UTF-8 and UTF-16 encoded documents.
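The BOM rule just quoted is easy to see in code. The following is a small sketch (the class and return strings are made up for illustration, not a real API) that inspects the first bytes of an XML entity:

```csharp
using System;

// Sketch of the BOM rule: UTF-16 entities MUST start with a byte order
// mark (#xFEFF), UTF-8 entities MAY. Class and strings are illustrative.
static class XmlEncodingSniffer
{
    public static string SniffEncoding(byte[] bytes)
    {
        // FE FF is the BOM serialized in UTF-16 big endian order.
        if (bytes.Length >= 2 && bytes[0] == 0xFE && bytes[1] == 0xFF)
            return "UTF-16 (big endian)";
        // FF FE is the BOM serialized in UTF-16 little endian order.
        if (bytes.Length >= 2 && bytes[0] == 0xFF && bytes[1] == 0xFE)
            return "UTF-16 (little endian)";
        // EF BB BF is the same BOM character serialized in UTF-8.
        if (bytes.Length >= 3 && bytes[0] == 0xEF && bytes[1] == 0xBB && bytes[2] == 0xBF)
            return "UTF-8 (with BOM)";
        // No BOM: per the spec this cannot be UTF-16.
        return "no BOM: assume UTF-8 (or honor the encoding declaration)";
    }
}

class Program
{
    static void Main()
    {
        // A document starting with "<?xml" and no BOM.
        Console.WriteLine(XmlEncodingSniffer.SniffEncoding(new byte[] { 0x3C, 0x3F }));
    }
}
```

This is also why a missing BOM is harmless for UTF-8 but fatal for UTF-16: without it, a processor has no mandatory way to recognize a UTF-16 entity.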
I recommend writing XML strictly in UTF-8. The advantages are described in the following article: Encode your XML documents in UTF-8.
The advantages of UTF-8 in short:
- It offers broad tool support, including the best compatibility with legacy ASCII systems.
- It’s straightforward and efficient to process.
- It’s resistant to corruption.
- It’s platform neutral.
And one thing should always be kept in mind: never use encodings other than UTF-8 or UTF-16, because support for them is not mandatory for XML parsers.
I wrote a small sample application to test MIDL for implementing applications communicating via Local Remote Procedure Calls (LRPC).
On Code Project there is an excellent example of doing RPC via TCP. LRPC instead uses the RPC runtime API directly for local communication, which makes it much more efficient.
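For illustration, a minimal MIDL interface definition might look like this (the UUID and all names are made up; at runtime, choosing the "ncalrpc" protocol sequence is what selects LRPC as the transport):

```
// hello.idl - a hypothetical interface definition compiled with MIDL
[
    uuid(12345678-1234-1234-1234-123456789abc),
    version(1.0)
]
interface HelloLrpc
{
    // [in, string] marks a NUL-terminated input string parameter.
    void SayHello([in, string] const char* name);
}
```

MIDL generates client and server stubs from this file; the server then registers the interface and listens on the "ncalrpc" protocol sequence instead of a TCP one.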
On web pages it is impossible to display big lists the way desktop applications (e.g. Windows Explorer) do. You have to pay attention to the limited bandwidth and the rather slow rendering of the web browser. The solution is to divide the “big list” into pages with “sub lists”. This is how search engines like Google work.
There is an MSDN article which gives a rough overview of the technique behind it.
Use SQL statements like these to realize paging:
SELECT TOP $$$PageSize$$$ CustomerID, CompanyName, ContactName
FROM
    (SELECT TOP $$$CurrentPageNumber * PageSize$$$ CustomerID, CompanyName, ContactName
     FROM Customers AS T1 ORDER BY ContactName DESC)
    AS T2 ORDER BY ContactName ASC