
Patterns and Frameworks - What's Wrong?

9 Aug 2022 · CPOL · 21 min read
Are you spending most of your time just writing code to "glue" your components together? Let's change that!
Have you ever used a Framework that looked great and gave the impression it would save you lots of time in writing repetitive code but, when you put it to real use, you were spending more time trying to overcome its limitations than if you didn't have a framework at all?

Before Starting

This article became bigger than I wanted, which made me postpone it for a while... yet I am not sure how to trim it down to the perfect article size, so I am publishing it as it is. I hope it is good enough for most readers.

Most of this article focuses on patterns, frameworks that try to solve those patterns, and the problems that badly conceived frameworks cause, including creating new patterns.

But if you don't care about that, you may just want to skip to the end of the article, where I talk about a different problem: frameworks that actually do a good job when used correctly, but that developers keep underusing, creating annoying and error-prone patterns through lack of planning or lack of understanding of the frameworks in use.

OOP (Object Oriented Programming) and Frameworks

Most developers today know the basics of Object Oriented Programming (known as OOP). It is the basic paradigm for some languages (like C++, C# and Java) and it is taught in most programming classes.

If you use OOP principles, you might be inclined to create new classes to represent... well, almost everything. If we have a repetitive problem, we will naturally think about new functions, new methods and, when the problem is complex enough, new classes to deal with the problem.

That works very well when we control all the classes, and I have always seen creating such classes as a way to avoid repetitive code or to make it more resistant to mistakes. That's why I would have types like PublicBinaryKey and PrivateBinaryKey in an encryption library instead of just using byte[]. Those two types contain a single byte[] as their content, yet users need to explicitly initialize them to have either one or the other.

That helps us avoid basic mistakes like calling Encrypt(value, privateKey, publicKey); when the correct order of the arguments is actually privateKey, publicKey and value.

Without knowing the Encrypt() API (and while using most code-review tools), I will either assume things were passed in the right order, or I will need to pull the code into a local branch and then go after the method definition or use IntelliSense to discover whether the call was right.

But if the call looked like the following, I would not need to look anywhere else to see that something is wrong, and I doubt anybody in their right mind would write code like this:

C#
Encrypt(new PrivateBinaryKey(value), new PublicBinaryKey(privateKey), publicKey);

While code that looks more correct, like the following, would cause a compile-time error because the arguments are in the wrong order:

C#
Encrypt(value, new PrivateBinaryKey(privateKey), new PublicBinaryKey(publicKey));

So, in an OOP world, having new types like PrivateBinaryKey, PublicBinaryKey and the like (also called Semantic Types) can help us make the code easier to understand and less error prone, as just passing things in the wrong order will either look weird to us, or will cause compile-time errors.
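
As a minimal sketch (my own illustration, not code from any specific library), such a semantic type can be little more than a wrapper that forces explicit initialization:

C#
// A minimal sketch of a semantic type: the only content is a byte[], but the
// type itself documents (and enforces) what that byte[] means.
public sealed class PrivateBinaryKey
{
  public byte[] Value { get; }

  public PrivateBinaryKey(byte[] value)
  {
    Value = value ?? throw new ArgumentNullException(nameof(value));
  }
}

// With a matching PublicBinaryKey, the Encrypt signature becomes:
// byte[] Encrypt(PrivateBinaryKey privateKey, PublicBinaryKey publicKey, byte[] value);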

But, will you do this if the frameworks you are using don't support your own application-specific types? Or will you just use byte[] for all three arguments and say it is the caller's responsibility to pass them in the correct order?

In fact, how much will your basic code change if you want to use your types with a particular framework? Will you change your types to inherit from a framework specific class, or to include a framework specific attribute, like [Serializable]? Will you ignore the framework altogether? Or would you create a pattern to be able to use the framework in a "maintainable" way?

Introduction

Frameworks exist to solve a "pattern problem". By pattern, I am talking about any situation where we need to repeat ourselves and our code, possibly with a minor change for every case.

We have patterns for many things. Have you ever written a class that implements INotifyPropertyChanged? Every property setter needs to raise the PropertyChanged event. Every property follows the same pattern: check whether the value really changed and, if it did, set the backing field and raise the event. The pattern itself grows if we decide to cache the PropertyChangedEventArgs instances instead of creating new ones every time.
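
For illustration only, the per-property pattern usually looks roughly like this (the Person class and its Name property are just an example, not code from any project):

C#
using System.ComponentModel;

// A sketch of the repetitive INotifyPropertyChanged pattern, including a cached
// PropertyChangedEventArgs instance reused on every change notification.
public sealed class Person : INotifyPropertyChanged
{
  private static readonly PropertyChangedEventArgs _nameChangedArgs =
    new PropertyChangedEventArgs(nameof(Name));

  private string _name = "";

  public event PropertyChangedEventHandler? PropertyChanged;

  public string Name
  {
    get => _name;
    set
    {
      // The same "did it really change?" check is repeated for every property.
      if (_name == value)
        return;

      _name = value;
      PropertyChanged?.Invoke(this, _nameChangedArgs);
    }
  }
}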

Well, our focus is not on that particular pattern. It is on patterns in general, and most frameworks exist to solve such patterns, allowing us to avoid the repetitive work.

The problem is that badly conceived frameworks end up forcing us to create another pattern to overcome their limitations. Depending on how bad those limitations are, we might end up with more code than the original pattern required, and maybe even "hitting walls" if those limitations are really bad.

So, this article is all about identifying patterns and the frameworks that help us solve them, but also about exploring the new patterns that may appear because of such frameworks, and evaluating what else can be done.

Simple Example

I want to start with a simple example, and only later talk about more complex cases.

So, have you ever used a "simple (web-)service communication" library?

Those usually have something like a single static Call() method where we pass the service address and the method name we want to invoke, followed by all the arguments the remote method might need.

For example, if we had a service like this:

C#
public static class MathService
{
  public static int Add(int x, int y)
  {
    return x + y;
  }

  public static int Subtract(int x, int y)
  {
    return x - y;
  }
}

We could invoke the Add and Subtract calls by doing something like:

C#
int result1 = (int)RemoteService.Call("http://SomeAddress.com/MathService", "Add", 1, 2);
int result2 = (int)RemoteService.Call("http://SomeAddress.com/MathService", 
               "Subtract", 1, 2);

The service call is not extremely bad, but it is far from great either.

Notice that those calls are not compile-time checked (I can mistype the method name or pass any number of arguments, and it will still compile). Also, those calls are "longer versions" of the more direct calls:

C#
int result1 = MathService.Add(1, 2);
int result2 = MathService.Subtract(1, 2);

Talking about simplicity and safety, we probably don't want to reference the RemoteService directly, pass method names as strings (which means we might mistype them and get no IntelliSense support) and cast the results.

So, we might develop a pattern where, on the client side, we have a "service client" class.

That is, we might have something like:

C#
public sealed class MathServiceClient
{
  private readonly string _remoteAddress;

  public MathServiceClient(string remoteAddress)
  {
    if (string.IsNullOrEmpty(remoteAddress))
      throw new ArgumentException("A remote address must be provided.", nameof(remoteAddress));

    _remoteAddress = remoteAddress;
  }

  public int Add(int x, int y)
  {
    return (int)RemoteService.Call(_remoteAddress, "Add", x, y);
  }

  public int Subtract(int x, int y)
  {
    return (int)RemoteService.Call(_remoteAddress, "Subtract", x, y);
  }
}

And then, as long as we don't mess up this client class, we only need to instantiate it once in the app, maybe with code similar to this:

C#
var service = new MathServiceClient("http://SomeAddress.com/MathService");

And we are free to invoke the service in a straightforward way as many times as needed:

C#
int result1 = service.Add(1, 2);
int result2 = service.Subtract(1, 2);

Also, by just writing service., we can get the list of available methods. We will also have compile-time errors if we mistype the method name, pass the wrong number of arguments or just the wrong type of arguments.

The Pattern

The pattern right now is to have a client class with the same method signatures as the service, and to implement such a class so it just invokes the RemoteService.Call() method, passing the right method name and casting the results when necessary.

We can still mess up method names and argument types when creating the client class but, once such a class is fixed, its users will have a much better API.

But, wouldn't it be great if we could avoid the pattern and have something "automatic"?

Transparent Proxy Frameworks

Most communication frameworks use the concept of transparent proxies, effectively eliminating the need to implement "client classes" manually. Although we need to change some things (like creating an interface and turning the static class into a singleton), on the client side we will just use a transparent proxy, that is, an object that implements the interface and calls the remote service, effectively an auto-implementation of MathServiceClient.

Honestly, having the interface isn't really a problem for me, as I think the best architectures will use an interface even if remote communication is not needed. Also, we get compile-time validation both when using the interface and when implementing it on the server side, as any difference in the method signatures will definitely be noticed by the compiler.
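
As an illustration of the idea (and not the API of any particular communication framework), .NET's System.Reflection.DispatchProxy is enough to auto-implement something equivalent to the MathServiceClient; RemoteService.Call below is still the hypothetical method from the earlier example:

C#
using System;
using System.Reflection;

public interface IMathService
{
  int Add(int x, int y);
  int Subtract(int x, int y);
}

// A sketch of a transparent proxy: every call on the interface is forwarded to
// the (hypothetical) RemoteService.Call, so no hand-written client class exists.
public class RemoteServiceProxy<T> : DispatchProxy where T : class
{
  private string _remoteAddress = "";

  public static T Create(string remoteAddress)
  {
    // T must be an interface; the runtime generates a type that implements it.
    T proxy = DispatchProxy.Create<T, RemoteServiceProxy<T>>();
    ((RemoteServiceProxy<T>)(object)proxy)._remoteAddress = remoteAddress;
    return proxy;
  }

  protected override object? Invoke(MethodInfo? targetMethod, object?[]? args)
  {
    // The method name and arguments are taken from the intercepted call.
    return RemoteService.Call(_remoteAddress, targetMethod!.Name, args ?? Array.Empty<object?>());
  }
}

// Usage:
// IMathService service = RemoteServiceProxy<IMathService>.Create("http://SomeAddress.com/MathService");
// int result = service.Add(1, 2);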

The issue is that this apparent solution only works because most examples use only primitive types and strings and never need to interact with any other framework. But what happens if we have more complex types and need to use two or more frameworks? Do they work well together?

Two Frameworks Together and Framework Agnostic Types

The problem with most frameworks is that the documentation shows us a simple problem, which the framework actually solves very well, but it never tells us how to deal with the complex problems. What's worse, many times they simply cannot deal with the complex problems and, to overcome their limitations, we end up rewriting a big chunk of our code to actually ignore the framework, making the complex problem even more complex. Is it worth simplifying the simple cases if it makes the complex cases even more complex?

So, the problem I am going to explore now involves two frameworks. The new example is to implement the following service interface (and expose it as an actual web-service), using a normal SQL Database as storage. Changing the existing types is not allowed at this moment.

C#
public interface IUserManagement
{
  User AddUser(UserName name, EmailString email);
  IEnumerable<UserId> EnumerateUserIds(Expression<Func<User, bool>> filter);
  User LoadUser(UserId id);
  void DeleteUser(UserId id);
}
// Assume that we are using nullable/non-nullable references,
// which means that all the parameters are non-null.

The types UserName and EmailString are semantic types. They validate their contents during creation and that means if we have a UserName or an EmailString, we know for sure that those were already validated and we don't need to validate them again.

UserId is also a semantic type, but it is simpler and its only content is a Guid. The purpose of the UserId type is to avoid using Guids directly, making things less error prone when dealing with the Ids of the many different tables that possibly exist in the system.
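
To make the example more concrete, here is a minimal sketch of what two of those semantic types might look like. The implementation details (constructors, the Value property, the From and New factory methods) are assumptions for this article; only the validate-once behavior described above matters (UserName is assumed to work like EmailString):

C#
using System;

// A sketch of the UserId semantic type: a validated wrapper around a Guid.
public readonly struct UserId : IEquatable<UserId>
{
  public Guid Value { get; }

  private UserId(Guid value) => Value = value;

  public static UserId New() => new UserId(Guid.NewGuid());

  public static UserId From(Guid value) =>
    value == Guid.Empty
      ? throw new ArgumentException("An empty Guid is not a valid user id.", nameof(value))
      : new UserId(value);

  public bool Equals(UserId other) => Value == other.Value;
  public override bool Equals(object? obj) => obj is UserId other && Equals(other);
  public override int GetHashCode() => Value.GetHashCode();
  public static bool operator ==(UserId left, UserId right) => left.Equals(right);
  public static bool operator !=(UserId left, UserId right) => !left.Equals(right);
}

// A sketch of the EmailString semantic type: validated once, at creation.
public sealed class EmailString
{
  public string Value { get; }

  public EmailString(string value)
  {
    if (string.IsNullOrWhiteSpace(value) || !value.Contains('@'))
      throw new ArgumentException("The value is not a valid e-mail address.", nameof(value));

    Value = value;
  }

  public override string ToString() => Value;
}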

And, for the article's brevity, User is declared like this:

C#
public sealed class User
{
  public UserId Id { get; init; }
  public UserName Name { get; init; }
  public EmailString Email { get; init; }
}

What I Expect

In a somewhat ideal world, I expect the service to be implemented by something as simple as this:

C#
public sealed class UserManagement:
  IUserManagement
{
  public User AddUser(UserName name, EmailString email)
  {
    var user = new User { Id = UserId.New(), Name = name, Email=email };
    DatabaseFramework.Insert(user);
    return user;
  }

  public IEnumerable<UserId> EnumerateUserIds(Expression<Func<User, bool>> filter)
  {
    return DatabaseFramework.Query<User>(filter).Select(user => user.Id);
  }

  public User LoadUser(UserId id)
  {
    return DatabaseFramework.Query<User>(user => user.Id == id).Single();
  }

  public void DeleteUser(UserId id)
  {
    DatabaseFramework.Delete<User>(user => user.Id == id);
  }
}

Then, I simply do a call like:

C#
CommunicationFramework.RegisterService<IUserManagement>(new UserManagement());

And the service would be up and running.

That's Not the World We Live In

Unfortunately, most database frameworks simply cannot deal with types like UserId, UserName and EmailString. Others can, as long as those types are changed to follow framework-specific rules, be it by implementing interfaces, using attributes or, much more limiting, by inheriting from their base classes, which means an object cannot work with two different frameworks that each expect their own framework-specific base class.

But remember the condition: we cannot change the existing types.

The communication frameworks face a similar problem. They might require attributes like [Serializable], [DataContract] or others on every existing type, and none of the types in use have those attributes. The server class might need to subclass MarshalByRefObject to be exposed remotely. This one is particularly funny: I once said that requirement wasn't needed and got an answer from Microsoft along the lines of "and how else could you set a Lifetime for it?", when I think we could use defaults, allow transparent decorations and the like. Anyway, the problem is there.

Going one step further, some communication frameworks cannot even serialize a Guid. So, what should we do?

Most Common Solution - Create a New Pattern

The most basic solution, for the database table, would be to:

  • Have another User class (let's call it DatabaseUser) that follows the database framework rules, like having [PrimaryKey] on the Id, and also using only database-supported types
  • Copy a User to a DatabaseUser, doing the appropriate conversion on each of the properties when inserting a record. There's a chance we will be doing databaseUser.Id = user.Id.Value.ToString() if the database doesn't support GUIDs
  • Copy a DatabaseUser to a User when reading the User table (see the sketch right after this list)
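
Here is a sketch of that database half of the pattern. The [PrimaryKey] attribute belongs to the hypothetical database framework, and the Value properties, constructors and UserId.From follow the earlier semantic-type sketch (UserName is assumed to work like EmailString):

C#
// A sketch of the database-framework-compliant class and the copy/convert code.
public sealed class DatabaseUser
{
  [PrimaryKey]
  public string Id { get; set; } = "";
  public string Name { get; set; } = "";
  public string Email { get; set; } = "";
}

public static class UserDatabaseAdapter
{
  public static DatabaseUser ToDatabaseUser(User user) => new DatabaseUser
  {
    Id = user.Id.Value.ToString(),   // the database framework cannot store a Guid directly
    Name = user.Name.Value,
    Email = user.Email.Value
  };

  public static User ToUser(DatabaseUser databaseUser) => new User
  {
    Id = UserId.From(Guid.Parse(databaseUser.Id)),
    Name = new UserName(databaseUser.Name),
    Email = new EmailString(databaseUser.Email)
  };
}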

For the service communication side, we will probably have to:

  • Create an interface that represents the service using the communication framework rules (similar to the database framework: having the necessary attributes like [Serializable] or [DataContract] and also using simpler data types if needed)
  • Create an adapter class on the server side that implements the new interface, converts the arguments and calls the original service, also converting the results back
  • Create a similar adapter on the client side, now implementing the original interface and converting arguments to the communication framework types, invoking the service, and converting the results back (see the sketch right after this list)
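
And here is a sketch of the client-side adapter from the last bullet. IUserManagementService and SerializableUser are the communication-framework-compliant names used a bit later in this article; their exact members are assumptions (a string-based mirror of the original contract), and the semantic types are assumed to expose a Value property and a constructor that takes the raw value, as in the earlier sketch:

C#
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public sealed class UserManagementClient : IUserManagement
{
  private readonly IUserManagementService _service;

  public UserManagementClient(IUserManagementService service) => _service = service;

  public User AddUser(UserName name, EmailString email) =>
    ToUser(_service.AddUser(name.Value, email.Value));

  public User LoadUser(UserId id) =>
    ToUser(_service.LoadUser(id.Value.ToString()));

  public void DeleteUser(UserId id) =>
    _service.DeleteUser(id.Value.ToString());

  public IEnumerable<UserId> EnumerateUserIds(Expression<Func<User, bool>> filter) =>
    // The filter expression is exactly what most frameworks cannot serialize,
    // a problem discussed later in this article.
    throw new NotSupportedException("Expression filters cannot cross this boundary.");

  // Every property is copied and converted by hand, on every call.
  private static User ToUser(SerializableUser user) => new User
  {
    Id = UserId.From(Guid.Parse(user.Id)),
    Name = new UserName(user.Name),
    Email = new EmailString(user.Email)
  };
}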

And, although we only have one service and one table, that's the pattern. For every new table, we follow the database pattern. For every new service, we follow the communication framework pattern.

If every table ends up with a dedicated service, that just means we have a bigger pattern where each table also gets a service, but the pattern for the table and the pattern for the service are both still there.

So, we went from:

  • Original service interface (IUserManagement)
  • One service class (UserManagement)
  • One "table-like" class (User)

to:

  • Original service interface (IUserManagement)
  • Original service class (UserManagement)
  • Table-like class (User)
  • Database framework compliant table-class (DatabaseUser)
  • Conversion from User to DatabaseUser
  • Conversion from DatabaseUser to User
  • Communication framework compliant service interface (IUserManagementService, maybe?)
  • Adapter from communication framework to actual service (converting both input arguments and results, let's call it ExportedUserManagementService)
  • Adapter from original client interface to invoke the communication framework interface (let's call it UserManagementClient)
  • We will probably need a communication specific user class (SerializableUser), as we don't want to mix DatabaseUser with communication/serialization specific attributes, especially because the client side doesn't need to know anything about the DatabaseUser class.

We went from three items to nine or ten where, aside from the implementation of the service itself (which might have varying degrees of complexity), all the new code is probably more complex than the original interface and table-like class, for no gain at all.

Is that right?

It is similar to what usually happens, but I know for sure it shouldn't be like that.

In fact, I can actually see things deviating a little. Instead of having a service in its original style and an adapter to expose it as a service, the service class will probably change to follow the communication framework requirements and limitations (adding attributes, inheriting from MarshalByRefObject, using simpler types, etc.). Many people say that's the best solution, as having a class in its original style plus an adapter is over-engineering... but guess what happens if you need to expose a service through two similar (but not compatible) communication frameworks? If you guessed a pattern that involves adapters, you guessed right!

If the frameworks were planned accordingly, we could just use the originally planned types and, if needed, tell the frameworks how to deal with the semantic types (without changing them) or which property is the database key, again without having to change the existing type.

If that were the case, we would keep the three original items and would just add conversions for each of the semantic types once, not at every place they are used. We would also not have to worry about entire new classes that exist just to replace the semantic types with simpler ones (like DatabaseUser, SerializableUser, UserManagementClient and the like), nor about copying/converting all the property values.
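
Just to illustrate the kind of extension point I am arguing for, here is a sketch of a purely hypothetical "configure it once" API (none of these RegisterConverter/RegisterSerializer methods exist in any real framework):

C#
// Hypothetical one-time configuration: teach the frameworks about the semantic
// types in a single place, instead of creating parallel classes everywhere.
DatabaseFramework.RegisterConverter<UserId, Guid>(
  toDatabase: id => id.Value,
  fromDatabase: guid => UserId.From(guid));

DatabaseFramework.RegisterConverter<EmailString, string>(
  toDatabase: email => email.Value,
  fromDatabase: text => new EmailString(text));

CommunicationFramework.RegisterSerializer<UserId>(
  write: (writer, id) => writer.WriteGuid(id.Value),
  read: reader => UserId.From(reader.ReadGuid()));

// The original User, UserManagement and IUserManagement stay untouched.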

Don't Use Semantic Types

I know some people would argue that I am only seeing issues because I used semantic types and said we cannot change the existing types. If I didn't use semantic types, everything would be better, right?

Well, I definitely used semantic types to "exacerbate" the problem, but that's a problem I see many projects suffer from.

At some point, the code becomes a giant mess of adapters and patterns just to make objects of one framework "talk" to another framework. The use of semantic types just exposes another problem I wanted to talk about: many projects start to use only primitive types and strings, avoiding even GUIDs, if just one of the frameworks in use can't deal with them. That hinders code readability and is more prone to bugs (after all, a Guid is a Guid, while a string can contain text that doesn't convert to a Guid at all).

I am not trying to say the frameworks are just bad, but most of them really do lack simple things that could make them much better team players. By team players, I mean being able to work with existing objects without requiring them to become subclasses of framework types, implement specific interfaces or the like.

In any case, what if I hadn't used semantic types in my example? Would all the problems be gone?

The answer is no. To explain: I never talked about the fact that the filter in EnumerateUserIds is not serializable, be it by .NET Remoting, WCF, gRPC or other frameworks. That has nothing to do with semantic typing.

Also, if we want to avoid adapters, we will need to change the service types because:

  • To expose the service by WCF, the interface would need to include [ServiceContract] and [OperationContract] attributes
  • To expose the service by .NET Remoting, the service class would need to change to inherit from MarshalByRefObject
  • gRPC starts with proto files and generates its own communication objects, which need to be filled by hand. It cannot deal with GUIDs, which means the conversion to Guid will need to be done manually at some point. We may also need to create those "client classes" by hand if we still want to present an easy-to-use API to our "service consumers".

And I am still not focusing on the fact that we also need a database library or framework and that, ideally, we should not put any database-specific details in the communication types, as the client side is not supposed to know anything about how the server side is implemented.

Why Do We Use Frameworks?

I said at the beginning of the article that Frameworks exist to solve a pattern. Yet, many times, we use frameworks just because we need something to do the work and we find a framework.

In this case, we were not looking to write "less code and avoid patterns". We just wanted to get the job done.

Many people say that we should prefer libraries to frameworks, and I must say that might be true. If a library can do the job you need (for example, the client-to-server communication I presented at the beginning of the article) and meets the performance, security, protocol and any other requirements your service must support, then the fact that a "Framework" can automate some class generation for us might be of minor importance, especially if such a framework cannot deal with the application classes and forces us to keep creating new patterns just to convert object types.

Looking at it differently, most frameworks use transparent proxies to allow us to focus on our main logic instead of on creating the proxies by hand, as the required pattern might be prone to copy-paste errors or simply be too big.

Yet they fall short when dealing with app-specific types and force us either to change our types just to be able to work with the framework or, when two or more frameworks are involved and there is no way out, to create alternative classes and copy data back and forth, doing the necessary conversions, which completely defeats the purpose of having the transparent proxies.

Is There a Solution?

That depends, both on the expectations and personal preferences.

I would say the solution is planning. To me, that usually means creating my own frameworks, knowing that I will not change any application type in any way to adapt to them, nor will I create adapter classes, as I see those as a waste of time and resources (both on the developer side and on the computing side). So, I make the frameworks capable of working with existing types that live in assemblies which don't even have access to the framework types.

That is, I can literally put a System.Drawing.Bitmap as a property on my objects and make the communication framework use it, even though it doesn't have the [Serializable] attribute, make the database/ORM framework use that property, and correctly save the contents in a Blob (and even generate the database creation script, if needed).

I also understand we cannot create everything from scratch, so the best answer might be: Choose wisely.

It is not because a framework seems very simple to use and automates a simple task that it is actually going to be of real use to avoid repetitive code or will really solve our problems.

I can give many ORMs as an example. People say: "With the ORM, I don't need to write queries. I just fill an object and ask the ORM to do the insert... and when reading, the ORM just fills in my objects".

That would be great if the ORM were really "filling my objects". Unfortunately, most of the time our objects are not compatible with the ORM, and we need to create an ORM-specific class and, at some point, copy/convert all the data from our main object to that ORM-specific object or vice versa. So, how much work did the ORM save us, if any?

Wouldn't it be better to just write our Insert methods and the like using the app specific classes and do the query by hand, doing any conversions there, but never creating an "ORM specific type"?
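
As a sketch of that alternative (the table and column names are illustrative, and the Value properties come from the earlier semantic-type sketch), the hand-written version works directly with the application's own User type:

C#
using Microsoft.Data.SqlClient; // System.Data.SqlClient works the same way

public static class UserTable
{
  // The Insert works directly with the application's User type; the conversions
  // from the semantic types happen here, and only here.
  public static void Insert(SqlConnection connection, User user)
  {
    using var command = connection.CreateCommand();
    command.CommandText =
      "INSERT INTO Users (Id, Name, Email) VALUES (@id, @name, @email)";

    command.Parameters.AddWithValue("@id", user.Id.Value);
    command.Parameters.AddWithValue("@name", user.Name.Value);
    command.Parameters.AddWithValue("@email", user.Email.Value);

    command.ExecuteNonQuery();
  }
}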

When Frameworks Work - But Aren't Used Correctly

Talking about planning, sometimes the issue isn't that the framework doesn't work as it should. It's that people just assume it doesn't, or don't even understand the problem they are trying to solve or the capabilities of the framework(s) they are using, and keep following "bad patterns" in how they use a framework.

I see this in a lot of projects that use JSON. JSON is used both as a communication format and as a database format, and I have seen situations where there is a "database" JSON class and a "communication" JSON class which are identical in most cases, yet the pattern requires both, and an object of one type is copied to the other for the rare cases where they might differ. In practice, most differences happen only when developers forget to update the "other half" of the class they are working on.

Anyway, the important thing is that the most common JSON serializers in .NET allow us to provide our own data-type converters. That is, they can serialize things differently for the database and for the communication just by passing a different argument during the serialization call (or when instantiating the serializer). It also means they can work with semantic types if we provide the right converters, which, ideally, would be centralized in a single place.
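
For example, with System.Text.Json (Newtonsoft.Json has an equivalent JsonConverter mechanism), a converter for the UserId semantic type from the earlier sketch can be written once and registered in application-wide options:

C#
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// A custom converter: UserId is written and read as a plain JSON string.
public sealed class UserIdJsonConverter : JsonConverter<UserId>
{
  public override UserId Read(ref Utf8JsonReader reader, Type typeToConvert,
                              JsonSerializerOptions options) =>
    UserId.From(reader.GetGuid());

  public override void Write(Utf8JsonWriter writer, UserId value,
                             JsonSerializerOptions options) =>
    writer.WriteStringValue(value.Value);
}

// Centralized, application-specific options: configured once, used everywhere.
public static class AppJson
{
  public static readonly JsonSerializerOptions Options = new JsonSerializerOptions
  {
    Converters = { new UserIdJsonConverter() }
  };
}

// Usage (assuming converters for the other semantic types are also registered):
// string json = JsonSerializer.Serialize(user, AppJson.Options);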

So, we could avoid all the trouble by just using application-specific serialization classes or methods that have the right converters in place. Unfortunately, often because people don't understand the JSON frameworks and their type-conversion rules, most "solutions" just use more and more types, each following different rules, and the pattern is to copy things from one object to another (including manually copying sub-objects) just to create a new set of objects that follow what was assumed to be the framework rules.

In this particular case, there is nothing wrong with the framework. It's the total lack of planning and/or understanding of the framework that causes the issue, possibly creating one of the following patterns:

  • Using attributes every time a property of a specific type is used, even though that could be configured just once
  • Creating a JSON-serializable type and converting from the app-specific type to the JSON-serializable object (or vice versa) on every call, even though the app-specific type was perfectly serializable by the JSON framework with the right converters
  • Limiting property types to primitive types, strings or arrays of those because we assume the framework cannot deal with anything else, and then manually validating and converting values where the expected types don't match, even though the framework was totally capable of doing the right thing to begin with

So, if you are converting your Guids to strings (or byte[]), maintaining near-duplicate classes (like "JSON conforming" and "app specific" ones), manually doing conversions before serializing or after deserializing, or anything similar... it's probably better to rethink the entire problem.

One or two bad cases like that are not really an issue but, when 90% of the code is doing that kind of stuff, or when the project has pull request after pull request just to add an attribute to a property of a well-known type that always needs that attribute, it is probably time to accept that the original planning was incorrect.

Conclusion

I don't have a real conclusion here, but I hope the article helps people see common bad practices when using or choosing frameworks, and I also hope that identifying those practices early leads to better planned and better directed projects in general.

History

  • 2nd August, 2022: Initial version

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Software Developer (Senior) Microsoft
United States
I started to program computers when I was 11 years old, as a hobbyist, programming in AMOS Basic and Blitz Basic for Amiga.
At 12 I had my first try with assembler, but it was too difficult at the time. Then, in the same year, I learned C and, after learning C, I was finally able to learn assembler (for Motorola 680x0).
Not sure, but probably between 12 and 13, I started to learn C++. I always programmed "in an object oriented way", but using function pointers instead of virtual methods.

At 15 I started to learn Pascal at school and to use Delphi. At 16 I started my first internship (using Delphi). At 18 I started to work professionally using C++ and since then I've developed my programming skills as a professional developer in C++ and C#, generally creating libraries that help other developers do their work easier, faster and with less errors.

Want more info or simply want to contact me?
Take a look at: http://paulozemek.azurewebsites.net/
Or e-mail me at: paulozemek@outlook.com

Codeproject MVP 2012, 2015 & 2016
Microsoft MVP 2013-2014 (in October 2014 I started working at Microsoft, so I can't be a Microsoft MVP anymore).
