Do you trust the hammer? How a simple question led me to a deeper philosophy about the dual nature of software, a duality that today spills over into all economic products.
I have a psychologist friend with whom I like to discuss topics from inside the digital bubble, because her completely different perspective on them provides interesting feedback. When we were once discussing social media and my gradual withdrawal from all of them except Twitter, she asked whether that meant I “trusted” Twitter. After a moment’s hesitation, my answer was a tentative “yes”, but the question stuck in my head because, for some obscure reason, it seemed odd to me. I needed to shed some light on that reason.
After much introspection, I realized that my “trust” in Twitter was based primarily on the fact that no trust was needed. To me, Twitter is a tool for managing the flow of public information, whose internal working logic is simple, unambiguous and relatively unchanging (let’s ignore for now that an alternative client is needed for this, and other such details). With Twitter, trust is placed in the sources and transmitters of the information on it, not in its own algorithms. The rules of the game are clearly given, and it is up to you to judge whether and how you can use them to your advantage.
The question of whether I “trust Twitter” therefore has a similar meaning to the question of whether I trust a hammer. It does not make sense. Why wouldn’t I trust the hammer? But how is it possible that this is not obvious at first glance? How is it possible that the first question subconsciously makes sense, while asking about trust in a hammer is obvious nonsense? Because, unlike with computers, it is crystal clear that a hammer cannot refuse to hammer a competitor’s nails.
It is a self-evident feature of tools that they have no will of their own. They are neutral conduits of the will of whoever uses them. This may sound like a trivially self-evident statement, but it has extremely profound consequences. Tools become part of those who use them, both figuratively and quite literally.
In contrast, software is the executor of its author’s will through a tool used by someone else. The user must believe that the author of the software is not pursuing his own interests at the user’s expense, without being able to verify it. Before computers, the only thing a manufacturer could smuggle into a product against its user’s will was “planned obsolescence”, and even that was addressed legislatively after the first really big case came to light. Computers have opened up a whole new universe in this direction, and my earlier expectation that legislation would impose the same standards as before has not been fulfilled. We operate practically without rules, and the only consolation is that humanity has been practicing trust between sovereign entities for thousands of years. It’s called politics.
Using software that requires trust is not the act of using a tool. It is an act of political alliance. Instead of the tool becoming an exclusive part of our identity, we surrender part of our sovereignty in favor of a collaborative contract. Software whose unconditional loyalty we cannot be sure of cannot become a part of us, just as we would not consider a hand our own if it moved from time to time according to someone else’s thoughts. This terrifying notion is our everyday reality with computers.
For example, this is at the core of why I still use almost no messenger. When I say I’m too lazy to choose one, people don’t really understand and quietly attribute it to paranoia. What could possibly be difficult about it? Installing it from the Play Store? The UX designed for semi-demented children?
The really difficult thing is to evaluate who deserves so much trust, and on what basis, that it makes sense to conclude a long-term political agreement with them. A private commercial firm, or state-run communication infrastructure? Is end-to-end encryption a guarantee of neutrality of will even on a centralized communication architecture? Or a distributed Merkle tree? Who is Moxie Marlinspike as a person? Is Russian origin a guarantee of greater prudence towards manipulation or, on the contrary, of greater practice in it? Does the product’s business plan guarantee its [long-term existence][gcemeters] and the preservation of its neutrality of will in the future? So many questions…
Although I’m capable of making such a strategic-political decision when it’s unavoidable (e.g. choosing an OS), I’m awfully happy when I can use tools on my computer that are real tools: tools that do exactly what I tell them to, according to clearly laid-out rules, and will keep doing so forever and ever until I myself decide otherwise. Like my dentist, who even a few years ago kept his patient database in Windows 3.11 on a 486. Or my friend’s dad, who still runs his company accounting in MS-DOS, and even after almost 30 years it can adapt to current legislative standards. Those products are old and clunky after so many years. But the same applies to our hands. It is the toll for a consistent identity of one’s own in this world.
Unfortunately, the trend goes in exactly the opposite direction, mainly through SaaS, and, to my horror, it is starting to spill back into the world of physical things as well. E-books that get deleted remotely because of a copyright lawsuit. Printers that stop working with non-genuine ink cartridges after an update. Tractors that won’t accept even purely mechanical spare parts without an authorized digital signature. Therefore, I intend to run for president of software engineers with the slogan:
Make computers tools again! (Not the other way round)
To turn computers back into tools that we can reach for as mindlessly as a hammer. Because otherwise, in the next century, we may have to say goodbye to the very concept of personal identity.