
Intro

When learning to code, we get these exercises and tests and katas and interview questions that take some array and expect some magical string or number, and we hear they are called algorithms. They are intellectual, complex, mathematical, abstract, annoying, and they feel completely random. But when you are actually building something real, code doesn't look like that at all. It took me years to understand what the problem is, and I am going to share it with you today.

The short version is this: if your program logic doesn't look like an algorithm, you are probably doing something wrong. Programming katas are simple because they need to be able to check your answers and give an unequivocal result. It's good to know them, but you shouldn't need to know them, because they are not meant for the real world but for controlled, short-term experiments. Unless you are going to work for a sorting company. That's a thing.

Now for the long version.

What you expected versus what you get

You get your first job as a developer and your tasks sound like "fix the color of the submit button" and "the report page shows the title on the right, move it to the left". And you think "why the hell did I go through those manual Bubble sort exercises and learn Quicksort partitions if this is what programming looks like?!". The answer is that you will get to a point where your skills will make people feel confident enough to let you design and architect the things you write. Only then will algorithmic thinking help, because you will have decided yourself what the button does and why its color and position are what they are.

When you start designing flows and entire systems and how they click together, it helps a lot to see a component as an algorithm: inputs, rules and outputs. "But, Siderite, a button is none of those!" you will say. And that is true, but also completely irrelevant. Your program logic should not care about a button, but about an input. And now you also see why summing distinct array items is a poor substitute for real-life problems: a click on a button is not a value in a properly contained list, but an event. And most programming exercises and even entire computer science classes don't treat events as abstract inputs at all.
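
To make that concrete, here is a minimal TypeScript sketch. The Source abstraction and every name in it are invented for illustration; the point is only that the logic consumes abstract inputs and never hears the word "button":

```typescript
// Hypothetical abstraction: anything that emits values over time.
type Source<T> = { subscribe(handler: (value: T) => void): void };

// The business rule sees values coming in; it neither knows nor cares
// that they originate from a click.
function handleSubmissions(
  submissions: Source<{ orderId: string }>,
  submit: (orderId: string) => void
) {
  submissions.subscribe(({ orderId }) => submit(orderId));
}

// The UI is just one possible adapter over that abstraction.
const buttonClicks: Source<{ orderId: string }> = {
  subscribe(handler) {
    document
      .querySelector("#submit")
      ?.addEventListener("click", () => handler({ orderId: "A-42" }));
  },
};

handleSubmissions(buttonClicks, (orderId) => console.log("submitting", orderId));
```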

Lately this has started to change, both in how programming languages treat actions and events as first-class citizens and in theoretical and programmatic concepts like observables, streams, functional programming, reactivity, event buses and messaging, microservices, and so on. It makes sense not to quite get it when you have not yet touched these concepts and when everybody and their grandmother focus on the latest frontend framework, rapid application development tools or extensions to VS Code, but at their very core all of these things are solutions to the same problem, following the same principles.

Breaking reality apart

As you start to climb toward seniority (and that does NOT mean going to Mexico so they call you "señor developer") you learn about Separation of Concerns, a good strategy for isolating changes, improving readability and testing, and easing maintenance and deployment. You learn about writing applications in layers: the UI, the business logic, the database access and so on, which is also about separating concerns. And as you go further and further down that path you realize...

Wait! This business logic thing looks like an algorithm! It abstracts all of its dependencies until all that remains is inputs, rules, outputs.

But there are things to confound you: events, user input, parallel tasks, race conditions, heavy load, the cloud. You can use the same tools, though: abstract everything, separate concerns. What is an event but a signal coming from a source? Your input is the observable source object, and the events themselves are just values coming in. Or just a method that receives an event object, and you handle sending the event somewhere else. Everything coming from the user can be handled the same way. Concurrency is solved by maintaining as little internal state as possible and, when absolutely necessary, guarding it against concurrent access through clear, established mechanisms like semaphores and transaction contexts.
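
As a rough sketch of that last point, here is one way to guard the little state you keep. The Mutex here is hand-rolled and hypothetical; in real code you would use whatever primitive your platform actually provides:

```typescript
// A minimal async mutex: tasks run one at a time, in order.
class Mutex {
  private last: Promise<void> = Promise.resolve();
  run<T>(task: () => Promise<T>): Promise<T> {
    const result = this.last.then(task);
    // Keep the chain alive even if the task fails.
    this.last = result.then(() => undefined, () => undefined);
    return result;
  }
}

const counterLock = new Mutex();
let processedEvents = 0; // the only internal state, always accessed under the lock

async function onEvent(_payload: unknown): Promise<void> {
  await counterLock.run(async () => {
    processedEvents += 1; // guarded read-modify-write
  });
}
```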

Once your logic is clear, your data structured and every external dependency abstracted away, you can run and test every subsystem in isolation. You don't care that something is supposed to be a click, or an error, or a network message, or whether it runs on Windows or Linux, how it's deployed, whether the database is available and what kind it is, what UI is being used and what it does, where in the world you are and what time it is, and so on. Your code is now an algorithm: a set of rules applied to predictable input which can then be tested for an expected output.
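
Something like this hypothetical fragment, where Clock, OrderStore and applyDiscount are names I just made up: nothing in it knows about HTTP, databases, buttons or time zones, so it runs anywhere and tests trivially:

```typescript
// Invented abstractions standing in for the real dependencies.
interface Clock { now(): Date; }
interface OrderStore { totalFor(customerId: string): number; }

// Pure rules over predictable inputs: an algorithm, nothing more.
function applyDiscount(customerId: string, store: OrderStore, clock: Clock): number {
  const total = store.totalFor(customerId);
  const isWeekend = [0, 6].includes(clock.now().getDay()); // Sunday or Saturday
  return isWeekend && total > 100 ? total * 0.9 : total;
}
```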

A new requirement comes in: you change just the part responsible for it. You can write unit tests before or after, or test it manually, without caring about anything outside that piece of code. A bug is reported: you write a test that reproduces the bug, you change the code, you see the test pass, and you never had to open a browser or an app, go to some external environment, or ask some other team for user access or permission to use the database. How does it sound to be able to code without ever having to manually go through application scenarios?
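
With the hypothetical applyDiscount sketched above, reproducing a bug report like "weekend discount not applied" is a handful of lines, no browser or database in sight:

```typescript
import assert from "node:assert";

// Fake the world: a store that reports a 200 total, a clock stuck on a Saturday.
const fakeStore: OrderStore = { totalFor: () => 200 };
const saturday: Clock = { now: () => new Date(2024, 5, 1) }; // June 1, 2024

// Fails while the bug exists, passes once the rule is fixed, and keeps
// guarding against regressions forever after.
assert.strictEqual(applyDiscount("c-1", fakeStore, saturday), 180);
```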

Of course there will be an ugly user-facing piece of code that you will have to write, but it should be minimal. Your logic is sound, almost mathematically provable to be correct, and how you plug it in is irrelevant. Yes, you will have to work with the graphic designer on your team and make the nicely colored card slide across the screen, but that is a meaningless process you go through in complete isolation from your logic. End-to-end testing is sometimes necessary, but it's a human thing to do as well: just check the "feel" of things, how they look, how they move, whether it works for you. The only reason you are going through it is that you have not been able to completely abstract the end user, with their stupid requests and complicated needs and ideas of what beautiful means.

Yet that is beginning to change as well. Artificial Intelligence, of all things, has advanced so far that you can create minimal interfaces using human language requests. "Build me a web page with a list of items that can be scrolled and selected to be displayed in a details pane on the right". I can imagine this being used in real life only when the logic of the application has already been written and one is able to just plug in such a monstrosity without much effort, while also being prepared for changing requirements: recreate the entire thing in a different way, but plug it in the same.

And there will be some sort of deployment framework, with people deploying stuff and checking stuff, with data in databases or other persistence mediums. Your code logic? Doesn't care.

Imposter syndrome

Does this sound like a pipe dream that a snake oil peddler is trying to sell you? Let me tell you that the only reason you are not working like that now is that someone thought it was too complicated and decided to cut corners. And they have been paying for it ever since, and so have you.

The only proven way of solving complex problems is Divide and Rule. Life is complex, and so are real problems. Separation of Concerns, Inversion of Control and Domain Boundaries are the tools you use to break any problem into smaller, manageable pieces. And that brings us back to interview questions and pointless algorithms.

When you go to a code test, you are the algorithm. They give you some input and an expected output and check to see if your internal rules are up to the task. Of course you could google for an easy solution. More than that, what kind of employee would you be if, whenever the boss asked for something, you built it from scratch without seeing what others did? What hubris to believe you could know the answer better than anyone else without even checking!

Test succeeded

The conclusion of this stream (heh!) of consciousness is that once you realize the algorithmic nature of any problem (once you abstract every interface with reality), you can see the actual value of being proficient in writing one. You might start with sorting and FizzBuzz and other bullcrap like that, but they are just steps on a larger ladder that will eventually make sense, just like learning the letters of the alphabet prepared you to read to the end of this post. Also, if you are trying to get a job as a book editor and the HR person asks whether you know all the letters of the alphabet, maybe you don't want to work there.

P.S.

The links in this article are important, especially if you are just beginning your journey as a developer. Check out the concepts there and learn to use them in your life, and it will get a whole lot easier!
