Optimizing Visual Studio solutions and projects, or how good intentions can pave a lot of road
I've had a horrible week. It all started with a good Scrum sprint (or so I thought), followed by a quiet period in which I could concentrate on my own ideas. One of those ideas was to optimize the structure of the solution we work on, which contains 48 projects, in order to save space and compilation time. In my eyes, I was a hero: for a company with tens to hundreds of devs, even a one-second improvement in build speed would matter. So I set about doing it.
Of course, the sprint was not as good as I had imagined. A single stored procedure led to no fewer than four bugs in production, and I was to blame for all of them. People lost time reproducing the bugs, deploying the fixes, reviewing the code, and so on. At long last I thought I was done with it, and I could show everyone how great the solution now looked (on my computer) and atone for my sins.
So, from a solution that took 700 MB clean and 4 GB after compilation, I managed to get it down to a maximum of 1.4 GB. In fact, it was now small enough to fit entirely on a RAM disk, leading to enormous speeds. For comparison, a normal hard drive reads at about 30 MB/s and an SSD (without encryption) at about 250 MB/s, while my RAM disk was running at a whopping 3.6 GB/s. That sped up both compilation and the parsing of files. Moreover, I had discovered that MSBuild has a /m switch that makes it use multiple processors. A compilation went down to about 40 seconds, from two and a half minutes. Great! Alas, it was not to be so easy.
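For the curious, the parallel build and the RAM disk hookup boil down to two commands. This is only a sketch: the solution name, paths, and the R: drive letter are placeholders, and the RAM disk itself comes from whatever third-party driver you use.

```bat
:: Build with MSBuild's /m switch (alias of /maxcpucount), which builds
:: independent projects in parallel, one worker per logical CPU by default.
msbuild MySolution.sln /m /p:Configuration=Release

:: Redirect the build output folder to the RAM disk via a directory
:: symbolic link (mklink /D <link> <target>; needs admin rights, and the
:: link path must not already exist).
mklink /D C:\work\MySolution\build R:\build
```

Note that /m helps only when the dependency graph actually allows projects to build side by side, which is part of why it shone on my machine and did little elsewhere.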
First of all, the steps I was considering were simple:
- Take all projects and give them a single output folder. That would decrease the size of the solution, since there would be no duplicate copies of the .dll files, and the sheer speed of the compilation would have to increase, since there would be less copying and less recompilation.
- More importantly, I was considering making a symlink to a RAM drive and using it instead of the destination folder.
- Another step I was considering was making all references point to the dll files in the output folder rather than to the projects, allowing projects to be opened independently.
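As a rough sketch of the first and last steps, the changes amount to editing each .csproj by hand; the project names and paths below are made up for illustration:

```xml
<!-- In each .csproj: point OutputPath at one shared folder instead of bin\ -->
<PropertyGroup Condition=" '$(Configuration)' == 'Debug' ">
  <OutputPath>..\..\build\</OutputPath>
</PropertyGroup>

<!-- Then replace a ProjectReference like this... -->
<ProjectReference Include="..\MyCompany.Core\MyCompany.Core.csproj">
  <Name>MyCompany.Core</Name>
</ProjectReference>

<!-- ...with an assembly Reference into the shared output folder. -->
<Reference Include="MyCompany.Core">
  <HintPath>..\..\build\MyCompany.Core.dll</HintPath>
</Reference>
```

Multiply those two edits by 48 projects and you get a feel for why it took so long.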
At first I was amazed at how much the solution had decreased in size, so I simply placed the whole of it on the RAM drive. That also sidestepped one of the issues with Visual Studio: when I selected a file through a symlink to add as a reference, the path would resolve to the target folder instead of to the symlink. And it wasn't easy, either. Imagine removing all project references and replacing them with dll references across 48 projects. It took forever.
Finally I had the glorious compilation. Speed, power, size, no warnings either (since I also worked on that) and a few bug fixes thrown in there for good measure. I was a god! Then the problems appeared.
Problem 1: I had finished the previous sprint with a buggy stored procedure deployed to production. Clients were losing money and complaining. That put a serious dent in my pride, especially since the problems ranged from too little attention to how I wrote the code to a downright lack of knowledge of the application's flow. For the last part I am not really the only one to blame, but it was my responsibility.
Problem 2: The application was throwing errors about the target framework of a dll. That was enough to make me see a major flaw in my design: the solution contained both .NET 3.5 and .NET 4.0 assemblies, and placing them all in the same output folder would break some build scripts. Even worse, the 8 web projects in the solution needed their output in their own bin folders, so that IIS would find them. I fixed it, only to see the size of the solution climb back to 3 GB.
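One way out, which I only sketch here because I never fully applied it, would have been to split the shared output folder by target framework, while leaving the web projects on their default bin\ output for IIS. The folder layout is hypothetical:

```xml
<!-- Hypothetical: segregate the shared output by framework version, so
     .NET 3.5 and .NET 4.0 assemblies never land in the same folder.
     $(TargetFrameworkVersion) expands to values like v3.5 or v4.0. -->
<PropertyGroup>
  <OutputPath>..\..\build\$(TargetFrameworkVersion)\</OutputPath>
</PropertyGroup>
```

That keeps the deduplication win within each framework family, at the cost of two copies of any assembly shared across both.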
Problem 3: Visual Studio was not smart enough to understand that, with a project loaded, going to the declaration of a member in the compiled assembly means I want to see the actual source, not the metadata view of the assembly. Well, sometimes it worked and sometimes it didn't. As a result, I restored the project references in place of the assembly references.
Problem 4: the MSBuild /m flag did wonders on my machine, but it did not do much on the build server, nor on slower computers with fewer cores than my own.
Problem 5: Facing a flood of problems of my making, my colleagues lost faith and decided not to even try the modifications that removed the compilation warnings from the solution.
Conclusion: The build went marginally faster, but not enough to justify a whole week of work on it. The size decreased by 25%, making it feasible to put everything on a RAM drive, so that was great, though to the detriment of working memory; I have yet to see whether that trade-off is good or bad. The multiprocessor hacks didn't do much, the warnings are still there, and even some of my bug fixes were problematic, because someone else had also worked on them without telling anyone. All in a week's work.
Things I have learned from all this:
- Baby steps. When I feel enthusiasm, I must take it as a sign of trouble. I must be as dispassionate as an ice cube and think things through.
- If I am working on a branch, integrate the trunk into it every day, so the merge doesn't get harder at the end.
- When doing something, do it from start to finish, no matter what horrors I see along the way. Move away from Sodom and don't look back. Someone else will fix that, maybe; just do your own task well.
- When finishing something, commit it to source control so it can easily be reverted in a single atomic operation.
It is difficult for me to adjust to something that involves this much planning and focus. I feel as if the chaotic development years of my youth were somehow better, even though at the time I felt they were stupid and that focus and planning were needed. Like a good Romanian, I am neurotic enough to see the worst side of everything, a master at complaining about it, yet incapable of actually doing anything about it. Yeah... this was a bad week.