20

Are there, in the canonical Gang of Four list, any design patterns that you often find misused, misunderstood or overused (other than the highly debated Singleton)? In other words, is there a design pattern you would advise to think twice before using? (And why?)

ethyreal
Federico A. Ramponi
  • I don't think you should discourage the use of any of them (if they fit the situation). These patterns are designed to solve common problems; you need to inform those implementing them of the problems before you discourage the use of the patterns that solve them. – Chris Missal Aug 12 '09 at 05:00

15 Answers

17

Factory Patterns...

I was parachuted into a project before where every single MyObject in the system had an equivalent MyObjectFactory for generating new instances. There was no concept of abstraction or extended classes... just plain old ClassX & ClassXFactory.

And no one could explain why... "It was just the way things had always been done."
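The redundancy is easier to see in code. A minimal sketch of what such a pairing tends to look like (class names are illustrative, not from the project described):

```java
// Hypothetical reconstruction of the anti-pattern: the factory adds a level
// of indirection but no abstraction -- there is no interface, no subclass,
// no configuration, so callers are still bound to one concrete class.
class MyObject {
    private final String name;
    MyObject(String name) { this.name = name; }
    String getName() { return name; }
}

class MyObjectFactory {
    // Functionally identical to writing "new MyObject(name)" at the call site.
    static MyObject create(String name) {
        return new MyObject(name);
    }
}
```

Unless the factory ever returns something other than `MyObject`, it is pure ceremony.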

Eoin Campbell
  • Sometimes this has to be done in order to facilitate testing of bad code. With the factories, your test code can mock out any factories and replace them with its own--but if you are going to go to that extent, why not use DI? – Bill K Sep 26 '09 at 03:40
  • Exactly. Statically linking to a factory isn't much better than statically linking via the new operator (except there's at least some possibility of object reuse). – kyoryu Sep 26 '09 at 04:07
17

The singleton pattern... global state often leads to problems when testing.

Any code depending on the singleton gets harder and harder to test, because that dependency isn't easily mocked.
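A minimal sketch of the testing problem, with hypothetical class names: the hard-wired version gives a test no way to substitute the singleton, while the injected version can take a stub.

```java
// Hypothetical singleton in the usual style.
class AppLogger {
    private static final AppLogger INSTANCE = new AppLogger();
    static AppLogger getInstance() { return INSTANCE; }
    void log(String msg) { /* writes to some global sink */ }
}

// Hard to test: the dependency is baked in; a test cannot intercept the call.
class OrderServiceHardWired {
    void placeOrder() {
        AppLogger.getInstance().log("order placed");
    }
}

// Easier to test: the collaborator is passed in, so a test can hand over a stub.
interface Log { void log(String msg); }

class OrderService {
    private final Log log;
    OrderService(Log log) { this.log = log; }
    void placeOrder() { log.log("order placed"); }
}
```

In a test, `new OrderService(messages::add)` records what was logged; the hard-wired version offers no such seam.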

Tigraine
  • Not only testing; they are also problematic when doing concurrency, or in an application where you have to reset the application state completely (such as games). – Jasper Bekkers Oct 30 '08 at 21:57
  • Everyone is always hating on the singleton pattern. I'm surprised this isn't at the top of the list. – Omar Kooheji Oct 30 '08 at 23:12
  • I often ask people that want to use singletons if they'd consider just using a static class instead. If they say no, I ask them how using a singleton is, practically, any different. Singletons should be about instantiation control, not global access. – kyoryu Sep 26 '09 at 04:08
  • One reason to use singletons over static classes is when you need to have a real object, and not just a pile of functions. A singleton object can derive from another class or implement an interface. In most commonly used languages, a static class can't. Of course, that doesn't fix the problems with Singleton, and you should really think twice before using it. FWIW, I only use singletons when they are completely stateless. Sure, you could instantiate more than one of these objects, but they would all do the same thing. – Daniel Yankowsky Dec 14 '09 at 15:22
10

The only one (besides the aforementioned Singleton and its partner in crime, the Factory) wouldn't be a GoF, it would be setters and getters when applied to an object's native properties.

Setters and getters applied to member variables are functionally identical to public member variables. A getter without a setter is more like a public final member variable; but at that point, why not just use a public final member variable? It does no more harm...

The only difference is that you "could" intercept the call and override it, but people rarely do. More often it's used as a crutch for procedural programmers to avoid OO programming (which is the real reason it's an anti-pattern).

With a setter and/or getter you are still exposing your inner member structure to the outside world (for instance, you'll have to refactor other classes if you find you need to change an int to a long), and you are almost assuring that some code that should be inside your object is instead being placed outside it.

There are a few exceptions I can think of:

Setters used to simplify an object's construction. Sometimes it's necessary to create an object and then set other values afterwards. These values should be immutable (you shouldn't be able to call set twice) for safety.

Getters used to access contained objects. Since the contained objects are usually able to ensure their own integrity, sharing them is great. Setters are generally bad in this case; you don't want an object with a specific state swapped out right under your nose, as that makes ensuring your own integrity much more difficult.

Java Beans used for screen components: yeah, I can't figure out a better way to implement these "property balls". Reflection comes in handy for these components, and the patterns are useful; it's kinda hacky, but it works.

DAO/DTO Bean objects. Honestly, I think these are an iffy usage of the pattern, but they are the pattern. Having to go through reflection makes manipulating the properties via metadata, instead of code, much more difficult than it should be. The bean's properties are always tied to some outside source (database format, data transfer format, component properties, ...), so why are we duplicating the work of defining each part?

Edit: Stolen from kyoryu's comment, brought up into the post because it's really a perfect summary of what I was saying and could be missed in the comments. Needed, since not everybody seems to get the concept that adding accessors to the language only codifies a bad OO design pattern:

Short version -

if (account1.balance > 1000)
{
    account1.balance = account1.balance - 1000;
    account2.balance = account2.balance + 1000;
}
// = BAD CODE

account2.deposit(account1.withdraw(1000));
// = GOOD CODE

The second one doesn't require accessors... – kyoryu (slightly modified by Bill K because I have a little more room than he did in his comment).

The second one moves the test and some other math inside Account rather than duplicating it throughout the code every place you might make a transfer.

Just to belabor the point EVEN MORE, note that with the "GOOD CODE" style it's pretty obvious that the output of .withdraw could be a Transaction object that contains information about the entire transaction including its success, source and destination and logging ability. How would this have occurred to someone who writes their code in "BAD CODE" style?

Also how would you refactor BAD CODE to even use such an object? It's just a mess.
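One possible shape for the Account behind the "GOOD CODE" call. The comment only shows the call site, so everything here is an assumed sketch; `withdraw` returns the amount, though it could just as well return the Transaction object mentioned above.

```java
// Sketch: the balance check and the arithmetic live inside Account,
// not at every call site. All names are illustrative.
class Account {
    private long balance;   // in cents, to avoid floating-point money errors

    Account(long openingBalance) { this.balance = openingBalance; }

    long withdraw(long amount) {
        if (amount > balance) {
            throw new IllegalStateException("insufficient funds");
        }
        balance -= amount;
        return amount;      // could instead return a richer Transaction object
    }

    void deposit(long amount) { balance += amount; }

    long balance() { return balance; }   // read-only view for reporting
}
```

With this in place, a transfer is just `account2.deposit(account1.withdraw(1000));`, and the insufficient-funds rule exists in exactly one place.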

Bill K
  • Getters and setters, just as many other patterns, are attempts to correct a limitation of the language; in this case, properties. Many Java coders shriek with fear when seeing a public variable in other languages. They often forget that in *saner* languages, if you need to add logic to the set or get of that variable, you can just replace it with a property of the same name, instead of making a lot of `getId()` and `setId()` *just in case*. (Even VB6 had properties.) – Esteban Küber Sep 25 '09 at 02:30
  • Actually, properties are the problem as much as getters and setters. The idea of OO is that you ask an object to do something for you. Where does a getter or setter fit into that? Getters or read-only properties aren't really harmful and are needed sometimes (avoid them unless you need them), but setters/writable properties are terrible. The biggest problem I have with properties is that they encourage this bad behavior. When you absolutely need one they are a nicer syntax, but having properties built into the language encourages people to think they are a good idea. – Bill K Sep 25 '09 at 17:19
  • Exactly what Bill K said. Short version - if (account.balance > 1000){account.balance = account.balance - 1000;TransfertoMe(1000)}; = BAD CODE. account.Withdraw(1000); = GOOD CODE. The second one doesn't require accessors... – kyoryu Sep 26 '09 at 04:10
5

I would also say the factory pattern. Similar experience to Eoin's: in my case, the project had tons of factories because some people thought object A might be used for a local implementation and object B for a remote one, and it was abstracted via a factory (which is a sensible thing to do).

But the "remote" implementation was never needed, implemented, or even foreseen in the future... and, on top of that, less-skilled engineers started adopting the pattern for lots of other things just as a cookie cutter...

Carl Manaster
el_eduardo
4

Actually, what I see most often is the lack of use of an appropriate pattern. Typical scenario:

me: "Hey, module A already has a piece of code that loops through a set of objects and performs database operation X on them. Why didn't you reuse that code?"
coder: "Well, but I had to do operation Y on those objects."
me: "What about refactoring it to use the Command pattern to execute X or Y as appropriate?"
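The suggested refactoring might be sketched like this (all names are hypothetical): one shared loop, with operation X or Y passed in as a command.

```java
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of the Command-style refactoring: the loop is written
// once, and the per-object operation (X or Y) is supplied by the caller.
class BatchProcessor<T> {
    void run(List<T> items, Consumer<T> operation) {
        for (T item : items) {
            operation.accept(item);   // e.g. database operation X or Y
        }
    }
}
```

Module A and the new code then share the loop: `processor.run(objects, obj -> doOperationX(obj));` versus `processor.run(objects, obj -> doOperationY(obj));`.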

I once saw usage of the Subject-Observer pattern get out of hand. It was implemented between processes using the database to persistently store the Subject. Because of the sheer number of updates to the subject, and the number of observers, the load on the database was tremendous, and caused an unforeseen system-wide slowdown.

Ogre Psalm33
3

The Mediator pattern definitely has the potential to be misused if all sorts of logic gets globbed into one huge class.

Anon
3

ALL.

Don't take me wrong here: I find them to be a great base and, when understood well, very helpful. But it takes a lot to acquire the design skill to know when and how to apply them in code. Actually, that's the case for clean-code skills overall.

It's not about not using them; it's exactly what you said in the question: "think twice before using". Understand well what you are doing and why. If you don't, talk to someone who can coach you on that, besides reading a lot. Beware of those who know all the patterns but can't clearly explain the what and why of any of them, especially of the one in question.

eglasius
  • To me, the biggest thing is that the design patterns are mostly useful if you look at them from a slightly different programming model than 'traditional' pseudo-procedural OO. Code written using design patterns should look an awful lot like code using an Actor model - and if writing code using an Actor-like model, the design patterns become extremely pragmatic and frequently obvious. – kyoryu Sep 26 '09 at 04:14
  • +1 This is why AntiPatterns got introduced. A Design Pattern used in the wrong context becomes an AntiPattern, meaning the drawbacks outweigh the benefits. – helpermethod Aug 01 '10 at 10:09
3

Actually, I would say design patterns in general are overused when a KISS (Keep It Simple, Stupid) solution is all that's needed.

Design patterns are great for making a system flexible, but at the cost of making the implementation more complex. This is only a worthwhile trade off when the flexibility will provide a practical advantage.

Esteban Küber
dsimcha
2

I just wanted to add another comment after seeing some of the "All patterns are bad" answers.

If you are a half-decent programmer working on moderately challenging problems, nearly all the patterns should have presented themselves to you at one point or another as the "obvious" solution to a problem.

The only real point of the Design Patterns book was to put names to what we all do every day so we could communicate better.

I suppose if you are a newish programmer, it would be very helpful to read through them, so that the day you need one you don't have to figure it out on your own; it's already in your toolbox. But in general, any of these could be figured out by a Gang of One (anyOne!).

If there are any of these you didn't already know, you probably just never needed it.

It is pretty useful to put names to the patterns and formalize them a little, though; I'm not complaining about the book at all. If you don't see the occasional need for nearly all of the patterns here, you just aren't working on very hard code (or you duplicate code a lot, something I consider THE cardinal sin).

Bill K
  • Of course you could figure them out by yourself, but it would take some time to work out all the subtleties, and when you're faced with a similar problem you may have to go through this process again. But I generally agree; there is no magic behind all this. If you are a good programmer with a decent understanding of OOD, you've probably used Design Patterns without knowing it. – helpermethod Aug 01 '10 at 10:12
2

The big one I see is the singleton pattern, where not enough care and diligence is applied as to how and when a singleton's destructor should be called.

For such a ubiquitous pattern there is hardly any discussion about the proper process to decide when a singleton must die.

Just my 0.02.

cheers,

Rob

Rob Wells
1

The observer pattern is pretty useless in C# because it has events.

jonnii
  • C# events are an implementation of the observer pattern. – kenny Oct 30 '08 at 21:25
  • I don't know events in C#, but surely the distinction between what "is a pattern" and what "is not" is language-dependent. Think of the Iterator... – Federico A. Ramponi Oct 30 '08 at 21:28
  • I guess that even the "routine call" could be thought of as a "design pattern", back in the 1940's... – Federico A. Ramponi Oct 30 '08 at 21:30
  • I think he means that since C# events are an implementation of the observer pattern, there is no need to 'roll your own.' – Steven Williams Oct 30 '08 at 21:51
  • Odd, I had former colleagues who wanted to roll their own C# events. It seems that the events in the language didn't give them deterministic ordering. I think I got -10 intelligence from attending that meeting. – Dan Blair Oct 30 '08 at 23:04
  • The observer pattern is used in C# you just don't have to code it yourself. Events are an implementation of the observer. – Omar Kooheji Oct 30 '08 at 23:13
  • That's exactly my point, you don't have to roll it yourself as they have a language construct for it. The same is true for the iterator pattern with IEnumerable. – jonnii Oct 31 '08 at 14:25
  • This is true to every language. For example, you don't need getters and setters when you have properties. All design patterns are just attempts to *fix* the language to do something that the language doesn't allow easily. – Esteban Küber Sep 25 '09 at 02:41
  • I disagree with what appears to be an underlying assumption here. Design Patterns (in OO design) are not about individual objects, but about interactions between objects. The observer pattern occurs when one object registers with and listens to another object. To say that C# has no need for such an interaction is wrong. Design patterns don't go away when a language has direct support for them - they just become easier to express. – Daniel Yankowsky Dec 14 '09 at 15:15
  • The question specifically mentions design patterns in the GoF book, which I took to mean implementing them as described in the book. If you were to implement an observer as the book describes, it would be redundant because C# has events. I'm not suggesting that you don't need to observe objects in C#, because that would be ridiculous. I also stand by my stance that a design pattern is different from a language feature, and the two should not be confused. – jonnii Dec 14 '09 at 17:22
0

First, "it depends" on the language - some structures in some languages lessen the need for certain design patterns.

Second, part of the template for the concept of a Design Pattern from the start has included sections for "Applicability" and "Consequences" - ignore these at your own risk. "Knowing" a pattern doesn't just mean you know how to code it in the language of your choice - it also means knowing when to use it, and what drawbacks using it may entail.

Nate
0

You can't have a straight answer to this question. It's mostly subjective and depends on application requirements.

  1. Most people say that the Singleton pattern is bad, but it's not bad for every user and project. For my project's requirements it serves the purpose: I need one ConnectionManager to handle session management between client and application, and Singleton does the job brilliantly.

  2. Say you are given a third-party jar with good documentation. The jar contains an inheritance hierarchy, and now you have to add an operation on each child. Since you don't have the source code, you can't do it directly; here you can benefit from the Visitor pattern. But if you did have the source code, you might not use the Visitor pattern at all; you could simply add the new operation to the parent and each child. Lack of source code does not imply that I am misusing the Visitor pattern.

  3. I want to limit communication between various objects, so I go ahead and implement the Mediator pattern. I want to limit the client's exposure to my system to hide its complexity, so I go ahead with the Facade pattern. That does not mean I am misusing these patterns. The same reasoning can be extended to other patterns, like Proxy, etc.
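For point 2, a minimal Visitor sketch, assuming the shipped hierarchy already exposes an accept(...) hook (all names here are illustrative, not from any real jar):

```java
// The hierarchy as it might ship in the jar: closed classes with an accept hook.
interface Shape { <R> R accept(ShapeVisitor<R> v); }

class Circle implements Shape {
    final double radius;
    Circle(double radius) { this.radius = radius; }
    public <R> R accept(ShapeVisitor<R> v) { return v.visit(this); }
}

class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public <R> R accept(ShapeVisitor<R> v) { return v.visit(this); }
}

interface ShapeVisitor<R> {
    R visit(Circle c);
    R visit(Square s);
}

// A new operation added on the client side, without touching the hierarchy:
class AreaVisitor implements ShapeVisitor<Double> {
    public Double visit(Circle c) { return Math.PI * c.radius * c.radius; }
    public Double visit(Square s) { return s.side * s.side; }
}
```

With the source in hand, the same operation could just be an `area()` method on each class, which is exactly the trade-off the answer describes.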

Ravindra babu
0

Only use a pattern when it's called for. You can't predict the future, so while you might put a pattern in to make the design flexible, what happens when the product takes a different direction and your pattern becomes the very thing keeping you from implementing what your users want?

Keep designs as simple as possible to start with. When you know more about how your design needs to change, use the appropriate pattern and not before.

Carl
0

REPOSITORY PATTERN

Most people start using this pattern right after reading the Domain Driven Design book by Eric Evans.

How many folks here have seen repositories constructed like data access objects?

Srikar Doddi