Updated: May 13, 2024
I hate buzzwords. With a passion. Buzzwords are a hallmark of incompetence, superficiality, lack of personality, and then some. They are what distinguishes a skilled technologist from a fluffer. They are also a great way to identify people, with minimal mental effort. You come to your workplace and some fake-smile human drone starts spewing words du jour, quoting this or that article they read and got inspired by on the business social media the day before. You instantly know, all right, let's avoid this person. If, then, algorithm!
Which brings me to algorithms. Recently, there has been a great deal of Internet chatter over the AI revolution, disruption and similar nonsense. A bunch of companies have developed sophisticated tools, you can interact with these tools using "natural" language, and all of a sudden, it's Artificial Intelligence (AI) everywhere, everything. Just like the fads of touch-everything and smart-everything a decade back. Boring. Worst of all, there's this whole talk about how AI will disrupt the modern workplace. Specifically, how AI can or should replace human workers. Sure, except those ought to be clueless middle-level managers, and not ordinary grunts. Let's elaborate.
The next revolution or whatever
The reason I harbor so much snark and disdain for this fad-ology is that, like pretty much any such talk, it's steeped in cutthroat corporatism. The idea behind technologies isn't to make lives better, it's to fatten the bottom line, no matter what. Naturally, the wet dream of every MBA-happy corpo "leader" is to reduce costs, and what better way than to get rid of unnecessary people!
AI waltzes in like a knight on a white horse, promising deliverance from each and every problem. Easy peasy, worker squeezy. On its own, the decision to automate wouldn't be that bad - after all, humanity has used technology to replace tons of jobs in the past two hundred years, effectively killing the man-powered assembly line. The problem starts with people who have zero understanding of technology preaching its usefulness with the conviction of blockbuster prophets.
This got me thinking. OK, let's be open-minded. Can AI truly replace human employees in the modern corporate office? The instant answer is, perhaps. Everything is doable. With enough money and effort, you can do pretty much anything. The question is not whether you could, it's whether you should.
All right, the next question is then, who ought to be replaced by machines and algorithms? Human resources? Lawyers? Payroll? Software developers? Managers perhaps? Ah. In all of these vibrant online brainstorm articles, the focus is always on the peasants, never on the hallowed stratum of management.
The idea is that AI can predict and replicate a lot of menial, repetitive actions done by human workers today, saving time and cost and reducing errors. For example, can AI do the customer support chat functionality well enough? Can AI code safely and efficiently enough? Can AI account for the randomness of human behavior? Can AI manage teams?
My simple answer to all these is: if the factor of randomness is greater than the factor of predictable behavior, then no, AI will not be able to replace those jobs (and people doing them). The ratio between false positives, false negatives and true positives will not be favorable enough to justify the investment and use of such blackbox solutions. On the other hand, if the jobs are predictable, then you can use AI instead.
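Just to put that ratio into something concrete, here's a minimal sketch of the favorability test. Everything in it - the counts, the threshold, the function name - is made up for illustration; the point is merely that when misfires dwarf the correct calls, the black box doesn't pay for itself.

```python
# Hypothetical favorability check - all numbers and the threshold are invented.
def worth_automating(true_positives, false_positives, false_negatives, threshold=0.9):
    """Return True if correct calls sufficiently outweigh the misfires."""
    total = true_positives + false_positives + false_negatives
    if total == 0:
        return False
    hit_ratio = true_positives / total
    return hit_ratio >= threshold

# A predictable job: the model gets it right almost every time.
print(worth_automating(true_positives=950, false_positives=30, false_negatives=20))    # True

# A randomness-dominated job: too many misfires to justify the investment.
print(worth_automating(true_positives=400, false_positives=350, false_negatives=250))  # False
```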
And here comes my bold claim: AI can easily replace bad managers. It can probably even do better (or less worse a job) than most humans in this regard.
Can machines replicate human behavior?
Humans are living and breathing paradoxes. We are creatures of habit, and yet, we often find ourselves testing the limits of our convictions, our abilities, consciously and unconsciously. We like to contradict ourselves. As a group, our idiosyncrasies even out, and there's good, dependable herd mentality. On an individual level, we're unpredictable.
Think about it: even if you're 40 or 50 or 60 years old, i.e., you have decades of experience in human interaction, you still don't always know what other people think, you often get it wrong, you assume other people do things based on YOUR beliefs and experience, and you constantly have to adjust your behavior, based on thousands of little inputs around you, all the time. A tiny clue here, a word there, a smile, a nod, a joke.
Some people are reasonably good at "reading" others, most aren't. By and large, we get by, following society's norms, and making a million little decisions and choices, every single day. We try to contextualize our experience, we rely on unreliable memories of our past actions, and we draw great comfort from patterns and the expectation that whatever happened will happen again the way we know it and like it.
By and large, this makes machines an unlikely substitute for our day-to-day interaction. We don't know what we will do ourselves in any one situation, therefore, there's no machine that can predict undecided behavior. At best, you have probabilistic luck.
To wit, I claim that any job that requires human interaction is a bad choice for AI. Perhaps very expensive AI could do it, and much later in the future, but that's not the point, is it. We want machines and algorithms to be cheaper than humans, and we want it...yesterday. Therefore, sales, customer support, doctors, anything of that ilk should not be considered for the great algorithmic cull of the century. Except...
What if a human is doing their job really badly? What if their human interaction sucks?
Ah!
Enter bad managers. Useless, counterproductive, even outright detrimental. Perhaps algorithms can do less damage? Bingo.
Over the past two decades, I have been (un)lucky to work with roughly 20 managers in my career. Most of them were mediocre or outright bad. Very, very few were decent leaders. Overall, roughly 70% of my managers should not have been managers - or anyone's managers. My work, the work of the teams I was part of, our general morale and success, all of these would have been much better if these managers had not existed and had not interfered in our work with their pointless posturing, ego games, and buzzwords. P.S. This is a juicy topic that I'm going to discuss at length in my upcoming book How to Make Your Career Suck Less. Right now, though, let's focus on AI.
Now, we can all agree that if you can remove bad managers from the equation, everyone wins. Great. The only question is, can you replace such people with a "wisdom" box that can manage teams with more success? The simple answer is, if there's a basic set of algorithms that can be applied to the equation, then yes, we can talk about an artificial solution. If there's nothing in common, if the behavior is entirely random, then, alas, no.
Luckily for us, luckily for humanity and AI, bad managers are highly predictable!
The hallmark of the mediocre boss
My experience is, well, that of one man, but it comes with a lot of data points. Twenty managers is enough for the whole 95% confidence thingie. I know, I know. So, here's the short list of behavioral elements and clues that indicate a bad management mindset:
- Micromanagement attitude - meddling nonstop.
- Control freak - trying to assert control or override other people's ideas and behaviors.
- Use of buzzwords. Some examples include Agile, DevOps, IoT, cloud, transformation, and lately blockchain, AI, ML, AI/ML, and whatever's trending that week. If you hear the phrases brown bag session, rightsize, low-hanging fruit, upskill or upgradation [sic], it's a 100% hit.
- They are a non-native English speaker, but they use strong American (Silicon Valley) lingo, like grandfather, sunset, double down, bring forward, circle back, put a pause on, and similar.
- They are a native English speaker, and they use American sports terminology (baseball and football) with non-Americans.
- They assume everything happens in one timezone (theirs).
- They give their phone number without a country code.
- They have a (Linkedin) profile photo that shows them smiling, arms crossed across the chest, head tilted sideways and down while looking up at the camera, wearing business-sensible or business-casual attire, like a buttoned T-shirt (but no tie as this is too formal).
- They aren't capable of (active) listening - very easy to spot, you talk to them, and they simply play their own internal tape without any regard to external inputs; consequently, it's impossible to have any sort of discussion, let alone argument, with such people as the probability of changing their mind is below zero.
- They place great focus on visible acts of workplace heroism - late hours, overtime.
- They adore "crisis management" - so-called "tiger team" sessions, "all hands on deck" situations, and such.
- They have no idea what the team is doing and/or do not understand the work/projects.
- They want a meeting for something that takes 14 seconds by email.
- They believe a chat program status (green, red, busy, etc) is important.
- Their inbox is always full of unread emails, or emails they looked at and then didn't do anything with.
- They use a preview pane in a classic email client.
There. I just algorithmicized 99% of bad managers worldwide. And this is where AI can come in handy. First, we can use AI to collect these signals from the company's communication systems, which helps the company figure out its "bad apples". The next step is to emulate them with software.
We can even call this process Lumberghization Learning Module (LLM) - named after Bill Lumbergh from Office Space, the finest parody (or is it?) manifesto on corporate stupidity ever made.
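If you want to see how trivially the checklist above turns into code, here's a toy scoring sketch. The signal names, weights, and the threshold are entirely my invention, not any real HR tool; a real version would pull these signals from chat, email, and calendar systems.

```python
# Hypothetical "Lumberghization" scoring sketch - signals, weights, and
# threshold are invented for illustration.
BUZZWORDS = {"synergy", "agile", "devops", "low-hanging fruit", "upskill",
             "circle back", "double down", "tiger team"}

def bad_manager_score(messages, unread_emails, meetings_requested, emails_sent):
    """Crude score based on buzzword density, inbox neglect, and meeting addiction."""
    buzzword_hits = sum(
        1 for msg in messages for word in BUZZWORDS if word in msg.lower()
    )
    buzzword_density = min(buzzword_hits / max(len(messages), 1), 1.0)
    inbox_neglect = min(unread_emails / 1000, 1.0)                      # 1,000+ unread = maxed out
    meeting_mania = min(meetings_requested / max(emails_sent, 1), 1.0)  # meetings per email sent
    return 0.4 * buzzword_density + 0.3 * inbox_neglect + 0.3 * meeting_mania

sample_messages = [
    "Let's circle back on the synergy action items",
    "Quick meeting? Should take 5 minutes",
    "We need to double down and upskill the team",
]
score = bad_manager_score(sample_messages, unread_emails=4330,
                          meetings_requested=12, emails_sent=3)
print(f"Lumbergh index: {score:.2f}")  # anything above, say, 0.5 is a candidate for emulation
```

Anyone scoring high enough is a candidate for step two: emulation.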
AI to the rescue
I mean, think about it. You come in to work, you haven't had your coffee yet, and the puff-me-feathers manager wants to annoy you with something pointless - something they could easily summarize in an email, and ask you to look at in your own time. There's no reason for a human to do this! You can have AI randomly choose topics and blast frivolous questions at different intervals between 08:00 and 09:15.
Then, through the day, the machine can ask for "status updates" for things you already sent an email about two days back, but which the manager didn't read, because they have 4,330 unread emails in their inbox, and they are of course way too busy with meetings to have enough spare time to read important information from their team.
The AI can use passive-aggressive snark whenever you take a break that's three minutes too long. Phrases like: "Someone's on vacation, I see" or "Did you enjoy the sun?" or "I didn't know you've taken up smoking". The possibilities to annoy people are limitless.
During the annual review, the AI can randomly give you a score - no worse than your clueless manager. It will quote something "naughty" you did just the week before the review as an indicator of year-long bad behavior on your part, and the reason, of course, why you must be given the center-of-the-bell-curve rating and no raise.
If you complain, the AI can proverbially nod and then repeat itself, dismissing everything you just said.
The AI can "walk" around the office space and spew sentences that include phrases like "action item", "top priority", "stakeholders", "low hanging fruit", and alike.
The AI can start chat program messages with "Hi", "Hello" or "Are you there?", and when you have to stop your important work to reply, not bother with a followup until about 10-15 minutes before the end of the day (or preferably, after).
Then, as a hallmark of true machine learning, the AI can mix all these different behaviors and come up with its own ways to disrupt work and kill morale. Corporate Tourette's! For example, send random meeting invites, send links to outdated documents, CC people into irrelevant email chains (with mandatory looping in), drop the odd buzzword in unrelated situations, ignore data, and many other wonderful things.
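And because the whole point is that this behavior is predictable, the emulator itself is almost insultingly simple. Here's a toy sketch - the messages, the timing, and the send() stand-in are all invented; wiring it up to actual chat or email is left as an exercise for the truly evil.

```python
import random
import time

# Toy bad-manager emulator: fires a random annoyance at random intervals.
# Messages and timing are invented; send() is a stand-in for a chat/email hook.
ANNOYANCES = [
    "Hi. Are you there?",
    "Quick sync? Should only take an hour.",
    "Any update on the thing I never read your email about?",
    "Someone's on vacation, I see.",
    "Let's put a pause on this and circle back with the stakeholders.",
    "Please see the attached (outdated) document.",
]

def send(message):
    # Stand-in for a real chat or email integration.
    print(f"[manager-bot] {message}")

def lumbergh_loop(rounds=5):
    for _ in range(rounds):
        send(random.choice(ANNOYANCES))
        # Wait a random interval (scaled down to seconds for the demo).
        time.sleep(random.uniform(1, 5))

if __name__ == "__main__":
    lumbergh_loop()
```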
There you go. The golden formula. No need to thank me for saving the future workplace.
Conclusion
The ideal moneysaver scenario is to simply get rid of bad managers. Period. I'm pretty sure most teams will survive well enough on their own, without being bothered by corporate human drones. In the worst case, they will do just as badly as if they were managed by a clueless chump. Most likely, they will cause far less damage. After all, if a workplace wants to be leet, they can use AI to cause damage. Profit!
On a serious note, if you ask me, AI will have its uses. But mostly in non-human-facing functions. Anything that relies on random human interaction, nope. For that matter, AI is a much better candidate for replacing software development than any "soft" skill in the office - not that most soft-skill jobs are needed to begin with. But hey. We're a long way from true general AI. Until then, we can play silly games.
I think there's no harm in trying to use AI as a replacement for bad management. What's the worst that can happen? A project will run way beyond budget and schedule, full of problems, people will quit, and there will be bugs discovered in the production environment, necessitating scramble-everyone all-hands-on-deck nonsense posturing? Oh, that's already happening with humans. So why not give machines a chance? Maybe they will learn how to be us.
Cheers.