When 'Considered Harmful' is Actually Helpful

Edsger W. Dijkstra's 1968 essay still provokes thought with its critique of coding practices, sparking ongoing debates about what counts as 'harmful' today.

KC Fairlight

Who would have thought that an essay published back in 1968 would still be sparking debates today? Edsger W. Dijkstra, a renowned computer scientist, stirred up quite the commotion with his letter 'Go To Statement Considered Harmful,' published in the Communications of the ACM. Written in the early decades of digital computing, it took aim at the unrestrained use of 'go to' statements in programming languages. If you're not familiar with it, 'go to' lets a program jump from one point to any other point, which can wreak havoc on code readability and logic. Dijkstra argued that the practice led to spaghetti code that was nearly impossible to debug or modify. So, where did this leave the programming community? In a state of contemplation, innovation, and, eventually, evolution toward structured programming.
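To make the 'spaghetti code' complaint concrete, here is a minimal sketch in C, a language that still has goto (the summing task and function names are invented purely for illustration and are not taken from Dijkstra's letter). The first version expresses a simple loop through labels and jumps; the second says the same thing with structured control flow.

#include <stdio.h>

/* Hypothetical example: summing the integers 1..n, written once with
   goto-driven jumps and once with structured control flow. */

/* goto style: the reader has to trace the labels to reconstruct the
   loop that is really being expressed. */
int sum_with_goto(int n) {
    int i = 1, total = 0;
loop:
    if (i > n) goto done;
    total += i;
    i++;
    goto loop;
done:
    return total;
}

/* structured style: the shape of the loop is visible at a glance,
   which is the readability gain Dijkstra was arguing for. */
int sum_structured(int n) {
    int total = 0;
    for (int i = 1; i <= n; i++) {
        total += i;
    }
    return total;
}

int main(void) {
    /* both print 15 */
    printf("%d %d\n", sum_with_goto(5), sum_structured(5));
    return 0;
}

Even at this toy scale, the goto version forces you to simulate the jumps in your head before you can see that it is just a loop; multiply that by a few thousand lines and the case for structured control flow writes itself.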

But what exactly does it mean for something to be 'considered harmful'? The catchphrase has stuck, popping up time and again in discussions on various practices and technologies across fields. To be considered harmful doesn't necessarily mean it's a guaranteed recipe for disaster. Often, it's a way to provoke thought and introspection about existing norms. It challenges us to look beyond the surface and question whether what we practice holds up to scrutiny.

Since the inception of this phrase, numerous technology writers and critics have borrowed it as a framework for presenting counter-arguments or critiques. For instance, in the tech world, you might come across titles like 'JavaScript Frameworks Considered Harmful' or 'Singletons Considered Harmful.' These modern takes echo Dijkstra’s bold stance, encouraging engineers to analyze tools and methodologies more critically.

Yet, it’s important to remember that not all practices are born equal. Context matters. Imagine programming without 'go to' in an era when structured paradigms were still budding; critics at the time had a reasonable case for its utility. Likewise, JavaScript frameworks solve many problems but are rarely a one-size-fits-all solution. So it's essential to weigh what we gain against what we give up before labeling something as harmful.

In embracing fresh ideas, we shouldn’t ignore the age-old mechanisms that got us here. Innovation often thrives in the tension between what's tried and what's new. Dijkstra’s essay helped push structured programming into the mainstream. However, it would be reductive to dismiss older practices entirely, especially since many ideas are continuously being iterated on and improved.

Nobody would dispute that programming has evolved drastically since the late ‘60s. With every new generation of developers, new patterns emerge, and with them new 'considered harmful' debates. For instance, artificial intelligence and machine learning bloom with promise, yet concerns about ethics, data privacy, and potential biases in algorithms grow alongside them. Sometimes these critiques use the 'considered harmful' lens to highlight possible downfalls and drive better, more ethical solutions.

This balance between skepticism and acceptance resonates far beyond coding. Think about how many societal practices can be scrutinized under this lens, provoking discussions about established social structures. For instance, consider the gig economy, hailed for offering flexibility and independence. Conversely, it’s criticized for its inconsistent earning potential and lack of long-term security, raising questions about the future of work.

These tensions can sound familiar in conversations about environmental issues too—as we shift toward renewable energy, old habits are critiqued for their ecological footprint. While these debates can seem daunting, they can inspire us to craft policies and technologies that respect our planet.

We shouldn't shy away from asking tough questions or engaging in conversations that make us uncomfortable. Whether we’re discerning the value of a 'go to' in coding or assessing the long-term implications of a given technology or practice, this notion urges us to dig deeper into the 'why' and 'how' of things.

By using 'considered harmful,' do we risk being dismissive? Sure, the phrase can be polarizing and even inadvertently close off important discussions before they start. However, when wielded with the intent of fostering reflection rather than judgment, it can encourage dialogues that propel us toward more informed decisions.

If you feel that a certain practice should be tagged as harmful, keep asking yourself: is this perspective providing more clarity, or simply adding to the confusion? Asserting something as harmful shouldn't be a call to arms but rather a rallying cry toward understanding and, hopefully, improvement.

By holding space for critique in a way that is considerate and open to counterpoints, we shape not only the technologies and norms we work with but also our approach to progress itself. This discourse allows varied voices and ideas to contribute to how we, as a society, grow. So, the next time you see 'considered harmful' attached to an idea or practice, let it pique your curiosity. Dive in, explore the layers beneath, question boldly, and enjoy the journey that emerges from critical examination.