On the Role of Assemblies
Any book on C# or .NET (and there are over 9000 of them) will tell you that an assembly is a unit of deployment. They usually also state that assemblies are versioned, and that the CLR has a powerful system for locating the correct versions, with the GAC, assembly binding redirects and all the rest. They say that DLL hell is over and that we can finally replace a single application assembly directly in production without breaking everything into pieces!
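The version-resolution machinery mentioned above is usually exercised through binding redirects. Here is a minimal sketch of an app.config fragment, assuming a hypothetical strong-named assembly Contoso.Utils whose older versions should all resolve to 2.0.0.0 (the name and public key token are illustrative placeholders, not a real library):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Contoso.Utils and the token below are placeholders -->
        <assemblyIdentity name="Contoso.Utils"
                          publicKeyToken="1234567890abcdef"
                          culture="neutral" />
        <!-- any version the app was compiled against maps to 2.0.0.0 -->
        <bindingRedirect oldVersion="0.0.0.0-2.0.0.0"
                         newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```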
All this sounds just fine, but if you look around, it is easy to notice that breaking a system into solutions, projects and NuGet packages is usually driven by a more complex set of considerations. Here are some criteria.
Independent deployment
The basic criterion for creating a separate assembly is the possibility of deploying application components independently. This allows you to redeploy only part of the application, or to create a compact update pack containing only the required change set.
Applicability: low for application code; average for reusable components.
Security and isolation requirements
Some code may require special privileges, or may need to be loaded into a separate application domain or host with specific requirements.
Code reuse
If a module is used in different applications, it is reasonable to extract it into a separate assembly, or perhaps to package it as a NuGet package. This criterion is easily confused with independent deployment, but it is nevertheless different: reusable code can be deployed independently of the main application code, yet the motivation for extracting it into an assembly is not the same.
Applicability: average. We still have not learned how to reuse code properly!
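For an SDK-style project, consuming such a reusable component as a package is a one-line affair. A sketch, with Contoso.Common as an illustrative package name:

```xml
<!-- project file fragment: consume a reusable component as a package -->
<ItemGroup>
  <!-- Contoso.Common is a hypothetical package name -->
  <PackageReference Include="Contoso.Common" Version="1.2.3" />
</ItemGroup>
```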
Acceleration of the development process
It is possible that part of the team is working on one part of the application and another part of the team on others. To facilitate communication, it is reasonable to separate out individual modules, define their public interfaces and the protocol of interaction between them, and hand them to individual groups of developers (possibly even remote ones). This reduces coupling between the modules, which matches the natural loose coupling between teams.
Even if there is only one team working on the application, assemblies can be used to form natural module boundaries. From the OO point of view, a module has all the key properties of a class: it has a public interface (abstraction) and a private part (implementation details). Such modules do not support inheritance, but that does not matter for this discussion. Assemblies/modules let you carve out large building blocks in the system, which simplifies assembling the finished system from those blocks, makes it possible to replace one block with another, and simply makes a complex system easier to understand.
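In C#, the assembly-level "private part" of such a module is expressed with the internal modifier. A minimal sketch (the module and type names are invented for illustration): only the interface and the factory are visible to other assemblies, while the implementation stays hidden.

```csharp
// Public surface of a hypothetical "reporting" module:
// other assemblies see only the interface and the factory.
public interface IReportBuilder
{
    string Build(string title);
}

// Implementation detail: internal, so it is invisible
// outside the assembly and can be replaced freely.
internal sealed class PlainTextReportBuilder : IReportBuilder
{
    public string Build(string title) => $"Report: {title}";
}

public static class ReportingModule
{
    // The only way client assemblies obtain an implementation.
    public static IReportBuilder CreateBuilder() => new PlainTextReportBuilder();
}
```

Clients depend on IReportBuilder and ReportingModule.CreateBuilder() only, so the concrete builder can change without touching any other assembly.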
Applicability: above average.
Cohesion/coupling and other scary stuff
Fundamental concepts of software design such as low coupling, high cohesion, and protected variation are applicable both at the class level and at the level of modules or subsystems.
Physically grouping classes into an assembly allows you to precisely control the assembly's outgoing dependencies and to limit the number of incoming ones. If a single module (in OO terminology) is highly cohesive but its parts evolve at different rates, that may be a good enough reason to split it into two assemblies, a stable one and an unstable one (now some clients can depend on the stable one and will not be subject to constant change). If there is a high probability of swapping one implementation for another, you can isolate the interfaces into their own assembly and load the implementation assembly dynamically.
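The interface-assembly-plus-dynamic-load arrangement can be sketched as follows. The interface lives in a stable contracts assembly; the host picks an implementation assembly at run time via reflection. The file path and type name here are invented placeholders, not a real plugin:

```csharp
using System;
using System.Reflection;

// Contracts assembly (stable): only the abstraction lives here.
public interface IStorage
{
    void Save(string key, string value);
}

// Host side: resolves an implementation chosen at run time.
public static class StorageLoader
{
    public static IStorage Load(string assemblyPath, string typeName)
    {
        // e.g. Load("Plugins/SqlStorage.dll", "SqlStoragePlugin.SqlStorage")
        // -- both names are illustrative.
        Assembly implementation = Assembly.LoadFrom(assemblyPath);
        Type type = implementation.GetType(typeName, throwOnError: true);
        return (IStorage)Activator.CreateInstance(type);
    }
}
```

Because the host references only the contracts assembly, the implementation assembly can be rebuilt and redeployed on its own schedule.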
Applicability: above average.
The notion of a class in OOP is pretty clear, but ideas about higher-level abstractions are rather vague. We have to model the abstract concepts of modules and packages using assemblies and namespaces. There is no unambiguous mapping between them, and this can be a problem: modules and packages have a public interface and a private part, but that distinction exists only at the assembly level; there is no such thing at the namespace level.
Breaking a system into modules is a quite complex, iterative process, and choosing assembly boundaries is a similar one. Assemblies were originally designed for implementing components and hot-swapping them at run time, but I consider them primarily a tool for obtaining a modular structure.
The number of assemblies should be adequate (the lowest possible) and is determined by the importance of each of the criteria listed above for the current team at a specific moment in time. This means that the modularity of the system should be reviewed periodically, as some criteria fade into the background while others become more important.
For me personally, the last three criteria are the most important. The importance of independent deployment has shrunk almost to zero; the rise of Continuous Delivery is doing its job. Very few people now deploy individual parts of an application. You can press one button, rebuild the whole MSI and upload a new version, or press a button and upload a new Web role to Azure in its entirety.

It makes little sense to upload only one assembly: you still have to run all the tests, and doing that by substituting a single assembly is usually harder than making a full run on the build server. An application can move to a new version of a package (especially if the package is provided by a third party), but even in this case it is easier to rebuild and upload the entire application. Splitting the pieces of your application into packages is usually quite time consuming unless we are talking about widely used basic components.
I am not saying that independent deployment is not a valid criterion at all, but, IMHO, it has already fallen by the wayside.
The correct use of a tool is not defined by canon. If your current approach to splitting code into assemblies simplifies deployment, accelerates development and helps reuse, you are on the right track. If, however, it adds more problems than it solves, you need to revisit it.
Expert in .NET, C++ and Application Architecture