We keep making the same mistakes with spreadsheets, despite bad consequences | Ars Technica

Spreadsheets represent unknown risks in the form of errors, privacy violations, trade secrets, and compliance violations. Yet they are also critical for the way many organizations make their decisions. For this reason, they have been described by experts as the “dark matter” of corporate IT.

Source: We keep making the same mistakes with spreadsheets, despite bad consequences | Ars Technica

As I often say, making real applications out of these Frankenstein monsters of data has been my bread and butter throughout my career. The workload that central IT departments in blue chip manufacturing companies could never quite wrap their arms around keeps getting bigger and bigger, leaving larger and larger gaps for people in the trenches to fill. So… too right, mate, and keep it up.

Why do people use VBA?

Why do people use VBA? In order to answer this question, we must first look at another question – who actually uses VBA in the first place? In 2021 I ran a poll on /r/vba where I asked redditors why they code in VBA. From these data, we can clearly see that the majority of people who use VBA do so mainly because they have no other choice. Many organisations run their entire business processes with Excel, and when a little bit of automation is required VBA is usually #1 on the list.

Source: Why do people use VBA?

I was just ranting about this to my kids a couple days ago.

Even in large companies, with massive IT departments, and lots and lots of internal databases and information systems, US businesses are still run on Excel. That’s not subjective. I’ve worked for decades inside three Fortune 250’s (and a couple of smaller shops), and bad Excel “applications” are in use at all of them. And after one person learns enough VBA to get a spreadsheet dealing with a particular issue to save themselves a little time, they start sharing it with their colleagues, and the problem gets worse. Half of my career has been built on making “real” applications out of Excel spreadsheets that were wobbling under their own weight.

But why?

Back in the old days, IT grew out of the accounting department. They had the only computer in the building, and it was an IBM mainframe. Great stuff, right? Saved a lot of time and paperwork, right? Except that it didn’t. It quickly ossified the company’s workflow, and permanently hobbled its ability to adapt to change. It would take years to get any changes made in the mainframe group, and people were frustrated. Along came spreadsheets, and everything changed.

I saw it myself in my first engineering job in 1993. We got new computers with Windows 3.1 and Quattro Pro. (And AutoCAD. And, of course, on mine: DOOM!) After weeks of bugging the lady who ran the mainframe — who apparently had to write a whole program — I got her to dump the BOM for a couple of our products to compare for similarities. I downloaded the 2 files to my PC with a token ring mainframe interface card. I think they were only about 1MB each. With 8MB of RAM, I had twice as much memory as our System/36, and I could open both BOM’s in a spreadsheet, and analyze them to my heart’s content. Understanding that I had more processing power on my desk than the freezer-sized unit in the other room was eye-opening.

American manufacturing companies (at least) never got the message. The invention of the spreadsheet spared them from facing the fact that the mainframe had become the black hole of their IT world. As changes were becoming impossible to get from the mainframe group, PC’s with Windows and Excel allowed people at all levels and in all job functions to start working around the mainframe and its limitations.

Now, these kinds of companies are decades behind the curve. They thought “outsourcing” was going to fix all of their problems. When it didn’t, they thought “consultants” would be the trick. Surely “agile” will do it this time, right? No. It’s not the process; it’s the mainframe. Forcing every corporate workflow and piece of data to be kept canonically inside a 40-50-year-old legacy system’s limitations is quite literally killing the company. It’s certainly killing their competitive advantage.

My current company still breaks our primary software component into 8 pieces because that’s what would fit on floppies to send to the plant to program the hardware. Every IT system — and every spreadsheet — in the company has to deal with this 40-year-old legacy issue because that’s what we programmed the mainframe to expect, and now that’s the only way to bill a customer for it. So the logistics of dealing with multiple trees and branches of software (and multiple trees and branches of documentation about the software) is multiplied by a factor of 8 to this day. There is no escape from this black hole. You can’t re-engineer this situation. It’s too ingrained.

I worked for one group which, on every engineering release, had to get a giant table of software versions — each with their 8 part numbers — into the mainframe. The process was so onerous that they would spend days clicking through mainframe terminal emulator screens to get the information they needed, to make a spreadsheet in a particular format, which they would send to another group to actually enter back into the mainframe. Part of the problem was the spreadsheet had to be in 3 columns, but the mainframe screens were in 4 columns (or vice versa), so a lot of it was purely formatting. I wrote a little program to automate all of this, but I’ve since left the group, and I’m sure no one uses it any more. The particularly stupid part of this story is that people fought me over writing a little piece of software that saved these people 10’s of hours a week, in the name of their own job security.

And no one in the corporate hierarchy cares. In this day and age, the executives are all just playing the waiting game, letting things atrophy — saying all the right things publicly — while they wait until the financials are inverted enough to make the company a juicy prospect for a buyout in an industry-wide rollup by private equity.

Meanwhile, actual people have to get stuff done to stay employed and feed their families. Inside the company, the managers have to look at the three-year lead times to get a simple application written by “corporate IT,” and can do nothing but just continue to throw bodies and VBA macros at it. Or, in my case, have me write something to do it. That is, until it gets successful enough that people notice, and it gets taken away from me, but that’s another story…

All Your Base Are Belong to Us

If you have a corporate- or school-issued computer, you have no control over it. Unless you wipe it and reinstall the OS, and even then, of course, they could leave things in the BIOS, and probably do. Then again, you barely “own” devices you buy, but that’s another rant. Here’s the task list for my corporate laptop.

Sigh

So let’s see…

  • Seven different reports about what I’m uploading to OneDrive.
  • Five jobs to keep Chrome and Edge up to date. Firefox and IE are also installed.
  • A job to make sure you keep Zoom around.
  • A [REDACTED] hourly job to make sure you haven’t elevated your privileges.
  • A job to make sure you haven’t (apparently) installed the npcap library. I mean, God forbid you should try to use this at a corporate site, which has probably used switching since… 1996 or so.
  • Three other [REDACTED] jobs to make sure you don’t do other things “they” don’t want you to do.
  • At least 5 jobs to make sure you don’t change… anything about the way they’ve installed Office, apparently.

Thirty-one jobs. Only one of these is mine, to do the one thing I need this (secondary) computer to do.

This machine bypasses my carefully-curated and ad-blocked local DNS. I don’t know what it uses for DNS, but I see that it doesn’t operate over port 53, and I don’t care to know any more.

It also won’t print to a printer in your house. I think I tried to print to a printer at the office once, and gave up after one try, because I knew it was going to be futile. Basically, no one prints anything. They must save a TON on printer costs as a company. Most printouts are a waste of resources anyway, so this might actually be genius.

Microincentives and Enshittification – Pluralistic: Daily links from Cory Doctorow

That increased profitability can only come from enshittification. Every product manager on Google Search spends their workdays figuring out how to remove a Jenga block.

Source: Microincentives and Enshittification – Pluralistic: Daily links from Cory Doctorow

Internally, every powerful person at Google is committed to ensuring that their rival-peers don’t stake out fresh territory as their own. The one thing every top exec can agree on is that the one guy who’s trying to expand the company into an adjacent line of business must not succeed.

What’s worse, these princelings compete with one another. Their individual progression through the upper echelons of Google’s aristocracy depends as much on others failing as it does on their success. The org chart only has so many VP, SVP and EVP boxes on it, and each layer is much smaller than the previous one. If you’re a VP, every one of your colleagues who makes it to SVP takes a spot that you can no longer get.

Those spots are wildly lucrative. Each tier of the hierarchy is worth an order of magnitude more than the tier beneath it. The stakes are so high that they are barely comprehensible.

That means that every one of these Jenga-block-pulling execs is playing blind: they don’t — and can’t — coordinate on the ways they’re planning to lower quality in order to improve profits.

I don’t know if I’ve read a clearer description of the things I’ve seen in 30 years of (mostly) Fortune 250 corporations. Over the course of my career, about a half dozen of my successful software projects — with many happy users — have been sabotaged because they made someone else look bad, or just had the unacceptable side-effect of making me look like I knew what I was doing. Seriously. I could write a book.

Hey, I got paid, and have had a comfy ride along the way. What could I expect as a developer toiling away in the bowels of some faceless blue chip corporation? The only thing the kind of companies I have worked for could offer would be a role with more responsibility but no more pay. Uh… pass.

What really frustrates me in all of this is the tireless effort and work to make sure that software never actually improves workflows or processes for the company, so that eternal middle managers can preserve their tiny little fiefdoms. Sure, we’ll make some software to do something, but by the time the managers “manage,” the people who don’t do the job have written the specs, the outsourced programmers who don’t understand anything about the process have written the application, the years have gone by, and the poor schmucks who have to use the thing have signed off on the acceptance testing just to move on, everyone is left with a piece of crap they can’t stand to use, and they wonder why anyone bothered. They’d have been better served just continuing to use horrific, shared Excel spreadsheets.

Google spends a whole-ass Twitter, every single year, just to make sure you never accidentally try another search engine.

I never want to hear another word about what else Elon Musk could have done to supposedly improve the world with the money he spent buying Twitter.

Likewise Google/Apple’s mobile duopoly is more cozy than competitive. Google pays Apple $15–20 billion, every single year, to be the default search in Safari and iOS. If Google and Apple were competing over mobile, you’d expect that one of them would drop the sky-high 30 percent rake they charge on in-app payments, but that would mess up their mutual good thing. Instead, these “competitors” charge exactly the same price for a service with minimal operating costs.

Since the 80’s, American corporations have learned to toe the precise line that will allow them to point fingers at their “competitors” in court to wriggle out of the en vogue legal definition of monopoly, but it’s all such a naked joke. The app stores are the same way. It is a certainty that very high-level execs at Apple and Google have colluded to keep their fees the same, so that the market for app development doesn’t actually work, and is anything but “free.”

Another Day, Another Boneheaded Move by #CorporateIT

I’ve been having mysterious problems with both of my corporate computers. Things that used to run only sort of run now. Today, I finally figured out that this is happening because #CorporateIT, in its ineffable wisdom, has decided to suddenly start automatically deleting any customizations to either the system or the account PATH variable by way of login (or logoff, or startup, or shutdown) scripts.

Years ago, Arvin was a lovely company with lovely people. Then it was sold out from under us, and eaten alive by Meritor (which has now been eaten by Cummins). They made a big show of bringing in some bonehead whose job was to set up “proper” IT policies. I watched in horror as he obviously just slapped together a bunch of white papers he rummaged through the internet to find, copy-and-pasted them into “controlled” Word docs with company logos in the header, and presented them as a legitimate security posture, despite obvious problems and glaring inconsistencies. Unintimidated, I took him to task about it. We went a couple of rounds, which ended with him literally screaming at me over the phone. I finally got the attention of one of the senior IT directors, and got a chance to vent about the situation.

One of the things I complained about was the removal of cron from all Unix machines, which I (as a Unix admin, at the time) was making liberal use of. First, cron doesn’t let you do anything you couldn’t otherwise do, so why remove the convenience? Second, if running things out of hours or on a schedule is a Bad Thing (TM), then why weren’t we also removing Task Scheduler from all Windows machines? Third, if this is about a security vulnerability in the binary, then just make sure you’re keeping up to date with patches from the vendor, just like everything else.
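Just to make that second point concrete, here is the same nightly job expressed both ways. The script paths are hypothetical, and neither version requires any privilege the user doesn’t already have:

    @echo off
    rem A sketch only. On the Unix side, the cron entry would be something like:
    rem     0 2 * * *   /home/me/bin/nightly-report.sh
    rem On Windows, the stock Task Scheduler does exactly the same thing, as the
    rem same unprivileged user, and nobody was proposing to remove it:
    schtasks /create /tn "NightlyReport" /tr "C:\Users\me\bin\nightly-report.bat" /sc daily /st 02:00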

The director then told me that that particular policy provision was actually written by her, as though this was supposed to make me suddenly backtrack, and withdraw my objection. I asked her why, and all she could do was say that this was considered an “industry best practice.” Yeah, but why!? The bottom line is that this was an unintended consequence of SOX. It’s just a thing that’s easy for consultants to suggest, easy for IT staff to do, and easy to verify, and it makes a nice bullet point on a validation study about IT policies. Job done! Give IBM $100K to rubber-stamp our SOX compliance report! But it does literally nothing to “secure” anything. All it can do is inconvenience users.

If there’s an actual security flaw in the cron daemon itself, then get it patched! There’s no reason to eliminate it entirely. At least, it’s not worth the inconvenience of uninstalling it on the slight chance that a new vulnerability might be found in it, and exploited by a bad actor, before it can be patched.

This is a hill I will die on.

I got my cron back.

Today’s issue with #CorporateIT is the same. Now I can’t run rails or rake or git at the command line unless I fully “path” them. This is what has been breaking my scripts. And I know they’re nuking both system and user PATH variables, because I tried the second after noticing that the first was being blown away. Why in the world are we deleting customizations to the PATH variable? On what planet does this make anything more secure? What malware wouldn’t try all known paths, regardless of the PATH setting, or fully path its own executables? How can this do anything but make people’s lives less convenient? It’s still possible to set, of course, so I guess I’ll write a .BAT script to run when I want to start working, which will update my user PATH variable so I can just get on with it.
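Here’s roughly what that script will look like. This is a minimal sketch: the tool directories are examples (point them at wherever Ruby and Git actually live), and since the login scripts keep wiping the user-level PATH anyway, it just gets rewritten wholesale.

    @echo off
    rem fix-path.bat -- run at the start of a work session. A sketch; the
    rem directories below are examples, not the real install locations.
    set "EXTRA=C:\Ruby\bin;C:\Program Files\Git\cmd"

    rem Fix the current console session so rails, rake, and git resolve again...
    set "PATH=%PATH%;%EXTRA%"

    rem ...and rewrite the per-user PATH (HKCU\Environment) for future sessions.
    rem setx doesn't touch the current session, which is why we also set PATH above.
    setx PATH "%EXTRA%"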

Wow. We’ve really locked down the configuration, huh, guys? The bad guys have no chance now!

To me, the implementation of any security measure depends on the answers to some fundamental questions: What’s the vulnerability? How large is the risk? What’s at stake? What is the mitigation? Is the preventative fix worth the cost in terms of money, access, and productivity? What’s the data we are protecting worth, such that it makes sense to implement the policy? I understand there’s a lot of subjectivity here, but these questions will separate the wheat from the chaff really quickly.

For instance, the staggering mountain of PowerPoint presentations that no one having a meeting can seem to do without, sitting on the corporate file server, means nothing to anyone outside of the people who are having meetings about it, and even then, only for the week they’re having the meeting. Does it make sense to install every security product on the market to protect this “information?” Not in a million years. Even Office documents you think are profoundly important are hard to dig up out of your collection after a little while, and hard to make sense of once you do. How would any of this “data” be strung together in any useful way by bad actors? For all of the hand wringing about it, the shared drives could be open to the public, for all the actual risk it exposes the company to.

I have another story about this, but I’ll save it for another time.

Every time we turn around, IT has implemented a new policy, a new layer, a new product that’s supposed to make our “data” “more” “secure,” and each time it happens, we lose the ability to do something useful. #CorporateIT dictates that our Teams chat histories vanish after just 24 hours. In a company which requires a month for anything to get done, and often requires multiple tries, it would be nice to be able to refer to that log for a month, no? Does no one in the company see this? What sort of crack-addled meeting was held between legal and IT to come up with this? Deleted email disappears after 30 days. If you want to save it to refer to later, you need to remember to hit the “archive” button. Again, when things take months to happen… But sure, blame it on litigation.

The really stupid part of this? These moves won’t save you legally. People involved with whatever is being discovered will be called to testify, under oath, about what they said, regardless of any records that attest to it. So this does nothing to prevent legal culpability. It’s just another hassle for end users in the name of a tick box on an auditor’s checklist.

Every week, there’s a new thing to justify a budget. Every week, it’s a new, unannounced loss of capability. I’m really getting tired of it.

Update

About a week after I wrote this, a coworker sent out an email to our entire group, saying that hundreds of thousands of documents we still rely on had been automatically deleted from our Sharepoint files and Teams channels. He said that they have restored these things, and he was working with IT to make the auto-delete policy kick in at 10 years, instead of the current 3. This is exactly what I’m talking about when I say that, if a company moves at a pace where even the simplest things take a month or three to do, then we need chat history to last at least this long. Our projects are sometimes decades long. We need our stuff for at least that long.

This is a perfect example of IT setting “security” policy without asking the basic questions above, and living in a fantasy world where they are free to believe that their consultant-and-whitepaper-suggested rules don’t have costs. At least my coworker didn’t throw up his hands, and say (basically), “You can’t fight city hall!” He took them to task, and now they’ve had to realize, in at least this one case — for, again, no actual legal benefit — the utter hassle they cause when their incentives are misaligned with those of the people who do the work that keeps them employed.

Update 2

Here we go again

Now people are educating each other about how to save important documents from being automatically trashed from OneDrive.

Tax Exemption for Churches (Is the Wrong Question)

For the many-th time, I see a repost from Twitter on some other social media site, complaining about the wealth of mega-church pastors, and trying to rile people up about how churches should NOT be tax exempt. And, sure, Joel Osteen’s lifestyle is a mockery of Jesus’ life, but there are only a handful of “mega” churches and “mega church” pastors in this country. Meanwhile, many, many thousands of the so-called 1% in this country pay a lower tax rate (and sometimes less ACTUAL tax) than the average, blue- or white-collar person does.

As a country swimming in debt, we would get a lot more mileage out of calling for meaningful taxation of billionaires and multi-hundred-millionaires before we start worrying about removing tax exemptions for churches and pastors. I think those posts and reposts on Twitter are probably jointly paid for by The Koch Brothers and George Soros, for the class-warfare angle. And maybe Bill Gates, for the anti-religion angle.

Joel Osteen pays taxes on his income. How much of it he has managed to shelter from the IRS is a game played just like all the rest of the 1%. The church, as a non-profit, does not pay taxes, because the money being received in donations cannot be considered a profit to tax. That’s the definition of how non-profit organizations work.

Churches are supposed to be prevented from getting involved in politics. It’s part of the deal in being religiously tax-exempt. (How this works when Presidents and candidates go to churches and make speeches from the pulpit is quite beyond me, but I digress.) If you start taxing churches, then there’s no reason for them not to get heavily involved in promoting particular candidates, and forming political action committees, just like corporations, taking an active role in getting people elected, and lobbying government for favorable treatment.

You may retort that large, corporate churches like the Catholics or Mormons already exert a huge influence on government, and I’d say you’re right, but it’s still less than the average Fortune 100. If we open the floodgates here… With the “war chests” accumulated by both of those organizations? As they say: you ain’t seen nothing yet.

Do the people calling for the removal of tax exemptions for churches really understand what they’re asking for? I don’t think they do.

Corporate IT “Automated Systems”

Today, I was contacted by #CorporateIT as to whether I was still using <expensive software>. I said no, and that I had tried to uninstall it, but it didn’t work. And, by the way, I’ve tried to “surrender” several other applications, so that my department is no longer billed for them, and NONE of them have worked.

So #CorporateIT guy forwards my email to <IT Director>. I’ve worked at Cummins for 10 years, and still can’t figure out the organizational structure. Anyway, he explains that their systems are great, and process 12,000 requests per month without any problems. I thank him for the considerate response, but this doesn’t change the fact that this has never worked for me, not even once.

Then I get dressed, go into the office, and try to “surrender” one VERY expensive piece of software from a machine that needs to be retired, and I get this error message. Now, I understand that these are (probably) not the same systems under the skin, but it’s the same aggravation, and I just wish that the people running these systems lived in the same IT world that the rest of us do.

Corporate IT “Support”

Suppose you have a problem on your company laptop, for which you contact #CorporateIT, using Microsoft Teams. Now, you’re already logged in as yourself, in Teams, but the first thing they always ask is to confirm that it’s… you, who is contacting them. Then they always ask whether you’re at a company site or home, and what your phone number is. Now, about 90% of everyone is working from home, and with the VPN, it wouldn’t matter anyway. Also, they have no need to call you on the phone. In the rare event they want to use voice, they’ll just use Teams!

So, given the lag of getting a person from the queue, and the normal flow of question and response, you’re about 5-10-15 minutes into the process, and you have wasted the entire time by answering three stupid questions. But if you don’t respond in about 10 seconds, you’ll be badgered with, “Are we still connected?”

After your chat, you’ll get prompted to rate support’s “help” in Teams, and then you’ll get about a dozen emails — whether or not they helped you in any way — including ANOTHER prompt to rate their helpfulness. And they are judged on this. I once rated a support tech poorly, because he was completely unhelpful, and didn’t even try to escalate the problem. He contacted me back to argue about it, and couldn’t disagree with my rating, but pleaded with me to change it, because that’s how they stay employed. I did, but I just don’t bother with the ratings any more.

AI and the Big Five – Stratechery by Ben Thompson

Mobile ended up being dominated by two incumbents: Apple and Google. That doesn’t mean it wasn’t disruptive, though: Apple’s new UI paradigm entailed not viewing the phone as a small PC, a la Microsoft; Google’s new business model paradigm entailed not viewing phones as a direct profit center for operating system sales, but rather as a moat for their advertising business.

Source: AI and the Big Five – Stratechery by Ben Thompson

I think it’s worth noting something here. Just before this paragraph is this:

The PC was disruptive to nearly all of the existing incumbents; these relatively inexpensive and low-powered devices didn’t have nearly the capability or the profit margin of mini-computers, much less mainframes. That’s why IBM was happy to outsource both the original PC’s chip and OS to Intel and Microsoft, respectively, so that they could get a product out the door and satisfy their corporate customers; PCs got faster, though, and it was Intel and Microsoft that dominated as the market dwarfed everything that came before.

It seems to me that Microsoft was guilty of the same sin as IBM when it came to mobile. IBM viewed PC’s as tiny little mainframes. Microsoft viewed “smart” phones as tiny little PC’s.

Whenever people write like this, it nags at me that a massive, multinational corporation’s motivations could be represented by a single viewpoint, held by a single person. But then I force myself to relax, realize that the organization’s actions really do boil down to being explained like this, and commit to the simplification for narrative purposes. So, acknowledging this… What “IBM” couldn’t “see” was that, while “limited” in relation to a mainframe, the PC was capable enough to do things that mainframes couldn’t do. I’ll never forget the Aha! moment I had in my first engineering job. “I was there, Gandalf; 3000 years ago.”

I was working for a small (80-ish people) company that made air compressors. They had just gotten bought by a huge, multinational air tool conglomerate, and the former owner had spun off a tiny portion of the tiny business to a new, separate company. As part of the new owner’s investment, the company was buying new PC’s for “the office.” Five of us got new, genuine IBM, i486DX2 66 PC’s with all the goodies, including real IBM Model M buckling-spring mechanical keyboards. They were glorious.

In an old garage, next to the main building, was a pile of “stuff” left over from the rearrangement. In that pile, I found an internal, 4800 bps modem, and a full-length “mainframe” card, for attaching to a token ring network, and emulating a terminal. I installed both into my PC, and got my boss to let me get a Prodigy account. (And discovered Doom.) The “mainframe” card allowed me to connect to the mainframe, but I didn’t (and still don’t) know anything about mainframes, so I just left it there.

Then my boss asked me to do a BOM comparison between 2 similar compressor models, and pointed me at two giant, mainframe printouts on green-bar, sprocket-fed paper, in those terrible binders with the variable-length metal straps to hold them together. They were about 2 inches thick. I started to compare the paper reports for about a minute before I had a thought…

I got the lady who ran the mainframe (an IBM System/36) to make me BOM reports for both compressor models. This apparently required an entire program to be written, and it was no wonder that mainframes were already dying by 1993, but I digress. I was able to download the reports to my PC over the “mainframe” card. Of course, these reports, being simple lines of text, were only a megabyte or so, but I had eight megabytes in my fancy new PC! So I was able to import both BOM’s into Quattro Pro, and do some spreadsheet manipulation to show the differences.

This sort of simple, quick, ad-hoc query and reporting capability, enabled by spreadsheets, has been the backbone upon which all corporate business has been run for almost 30 years. A lot of company data now lives in cloud services, which have their own query and reporting tools, but my perception is that Excel is still a core tool that the majority of people in the Fortune 1000 are using to manage their workflows. Like, you could take away literally everything else but Excel and email, and you’d be fine. It would take some adjustment, of course, but the business would carry on. That’s how critical it is.

IT managers in large corporations like to think that their multi-million dollar IT systems are special, and there’s an attitude that the company couldn’t exist without them now that they’ve been implemented. Entire kingdoms are built around them in the modern, corporate, feudal-like system present in every Fortune 1000. However, the people running these systems don’t seem to understand that there is invariably enormous activity in the company devoted to shoring up these systems with ad-hoc tools in Excel, simply because the team responsible for the system will never have the time to implement the customizations the users need to make the system truly useful for their work. At least, if they do know it, they ignore it, and they can, because they are not held accountable for the vast quantities of technical debt and wasted work created by their compromised implementation, which stopped short of all the promises upon which the system was sold to the monarchy. The true costs were never actually presented, and now that shortfall not only gets paid anyway, it gets paid over and over, all across the company, because spreadsheets do not “scale.”

I didn’t start out to make that point, but this is why I write: to “work out my thinking,” as I state in the sub-title of this blog.

How do I know? Because if I could sum up my 27-year career, the central theme of it would be replacing terrible, shared Excel spreadsheets with — hopefully, less terrible — web and native applications, tailor-made for the workflow the spreadsheets were supporting. I can count 13, right off the top of my head, and I’m sure I’m forgetting some of the smaller ones. I’ve spent about 21 of my years in Fortune 250’s, so maybe I have a jaded view, but my feeling is that this extrapolates through all big companies, world-wide.

This is what IBM missed. People know what they need, and will use “manual” effort to get around corporate IT lethargy. At first, it was routing around mainframes, and their impossibly slow development times. Now it’s every large, “corporate” system, like CRM or ERP or PDM, and their impossibly slow development times. The limitation of “the mainframe” wasn’t in its hardware or its development language; it was in the system of fiefdom that is the corporate budget allocation process, and the unintended consequences it produces, specifically the unaccountability inherent in the fact that the monarchs can’t understand the technical and logistical limitations of customizing a large system, so the true costs get elided in the endless budget cycle. And when an aging system is deemed fit to retire and replace, the whole cycle starts all over, with corporate IT creating a system just shy of what’s really needed, and end-users creating spreadsheets to backfill the gap.

A lot of these kinds of systems — particularly HR — have been moving to the cloud. Why? In my estimation, it’s not because they’re cheaper, even on paper. It’s because those systems are fully-formed, and include all the end-user-facing querying and reporting needed to make the system useful for every requirement. Fortune 500 companies could have made a streamlined version of, say, Workday, for their specific, internal use, but corporate IT — as a standalone, ivory tower, ultimately beholden only to the CEO, who couldn’t care less — could never figure out how to work closely enough with the user community to address all of their needs. So now, users have to put up with yet-another-end-all-be-all system, designed to address the needs of every company on earth. But! At least, once they figure out the workflow to get what they need, it’s all downhill from there. Here’s the key: at least it’s possible without Excel.

More and more workflow operations will continue to expand into cloud-based services, but it’s only possible to do this with services every company needs. This is why we’re seeing a deluge of advertising for HR apps, even on TV, each designed to hit a different company size and price point. It’s not possible to do this with, say, PDM applications, so companies like mine are going to continue to be hamstrung by systems like Integrity/Windchill. On the one hand, it’s become an important tool which must be used to get products out the door. On the other hand, it doesn’t do a whole bunch of stuff people really need it to do with the data it already has — and it never will — so there are a whole bunch of Excel spreadsheets running loose in the company that duplicate the data, waste manual effort, and do the things that need to be done, which IT has no knowledge of, and does not care about, because it doesn’t show up as a liability against their budget. And the situation will continue, for the foreseeable future.

Corporate IT, NodeJS, “Tech” Companies, and Freaking Microsoft Windows

The Scene

A few years back, as part of a long, slogging series of unfortunate events, I had been tasked with developing a new web application, which circumstances dictated should be written in Java. Books could be written about this one-year period of my career. (And not, like, inspirational ones.) Anyway, part of the process included trying to get people to realize that no one, these days, wrote web apps in Java without using one of the many popular JavaScript libraries for the front end (like React or Angular), and to get my management and corporate IT to understand that I needed to install NodeJS on my machine to facilitate this. Up until this point — and despite the fact that it was obviously used by other development teams in the company — it was not on the “approved” list of software to be installed on local machines. Through several strained meetings and rounds of email, someone, somewhere, deep in the bowels of IT, corrected the obvious oversight, and put it on the list.

The production version of NodeJS was 8, at the time of approval.

This kerfuffle was but one small facet in the gem that was this job posting. In the middle of the development process, I jumped at another job opportunity, and left my Fortune 250 for a different Fortune 250. The IT environment was eerily similar, and led to this post about making Windows tolerable. It was this experience that got me to see the real root of what I’m complaining about here.

And then, through a short series of more unfortunate events — and one amazing event — I came back to the original Fortune 250, in a different department.

Some months later, just after getting settled back in, I got an email asking me if I would approve a new version of NodeJS to be officially blessed and uploaded to the internal repository.

A Symptom, not the Disease

Strangely, I was being asked to approve NodeJS version 9. If you’re not familiar, NodeJS uses a version numbering system reminiscent of the one the Linux kernel used to use, where even-numbered major releases are the ones that become stable, long-term-support releases, and odd-numbered releases are short-lived development releases that never receive long-term support. In no way should 9.x be considered for use in projects inside a blue-chip Fortune 250.

I explained this situation to a laundry-list of TO: and CC: recipients in a long email thread that had already been making rounds inside the company before someone finally saw my name attached to the original request, and added me to the chain. Of course, my explanation was ignored, but I only discovered this 6 months later, when I was being asked, again, to approve version 9. Apparently, I was preventing some developer in India from doing his work on a “high priority project” by not having approved it already, and I needed to get on the stick.

I became more blunt at that point. First, I didn’t do whatever was done to get it certified the first time, so I didn’t know why I was being called on to do it again. Second, I tried to make a case for exempting development libraries, like NodeJS, from the slow process of getting them approved for internal use, and uploaded to our internal software delivery site. This led to another important person being added to the chain, who, surprisingly, supported my argument, but, again, nothing changed.

A month later — seven months into this “discussion,” and presumably still holding up a “high priority” project with a “requirement” for 9.x — I got another email, which included a screenshot of an error from Angular, saying that it no longer supported NodeJS 8.x, and that it needed at least version 10.x or 12.x. Again, I pled with the list of people involved in the email chain that we needed to treat development libraries and applications differently than we treated, say, Office applications. I pointed out that, in the time that we had been fussing over version 9, version 14 was now shipping.

Six months after this exchange, I got an email from a desktop support technician. He was asking for clarification about details when installing… wait for it… version 8 on a developer’s computer. That’s right: After over a year of this exercise, we were still fighting to get a version that’s now a year and a half out of support installed on a developer’s machine.

And then, the situation actually got even worse. The developer’s “computer” was really a shared environment (like Citrix, et al.), and the shared NodeJS install was constantly being re-configured by the multiple developers using the same machine across projects. The support person was actually savvy enough to have suspected this, and was asking me about how it worked. I confirmed that this would, indeed, be a problem, and we figured out the flags to install it into each person’s personal directory, and keep the node_modules directory separate, per user. So, at least we figured out how to successfully install a version of Node that was dangerously out of date onto a shared computer.
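For what it’s worth, the same per-user separation can also be expressed with npm’s own configuration (rather than the installer flags, which I won’t try to reproduce from memory). A sketch, with illustrative directory names:

    @echo off
    rem per-user-npm.bat -- give each account on the shared machine its own
    rem global-install prefix and package cache. Directory names are illustrative.
    set "NPM_HOME=%USERPROFILE%\npm"
    mkdir "%NPM_HOME%" 2>nul
    mkdir "%NPM_HOME%\cache" 2>nul

    rem "npm config set" writes to the per-user .npmrc, so each developer gets
    rem their own settings even though the Node install itself is shared.
    call npm config set prefix "%NPM_HOME%"
    call npm config set cache "%NPM_HOME%\cache"

    rem Make per-user global installs (npm install -g ...) resolvable in this session.
    set "PATH=%PATH%;%NPM_HOME%"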

Actually trying to use NodeJS for the job it was created for, and downloading a stack of Javascript libraries to support Angular or React, led to another discussion of how to get it to play nicely with our corporate, Active Directory-authenticated firewall, which — naturally — blocks all access to the internet from anything that doesn’t run through the Windows TCP/IP stack. Say, like npm or yarn trying to access the NPM repository. I had figured out a workaround for that in the first few months of working at the company, and just pointed them at Corkscrew, which transparently handles the NTLM authentication for command-line utilities like npm (or Ruby’s Bundler).
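The general shape of that workaround, for the record. This is a sketch that assumes some local helper proxy is already listening on localhost:3128 (the port is an assumption) and handling the corporate authentication, whether that’s Corkscrew as I used it or something like Cntlm:

    @echo off
    rem proxy-setup.bat -- point command-line tools at a local helper proxy that
    rem deals with the corporate proxy's authentication. Port is an assumption.
    call npm config set proxy http://localhost:3128
    call npm config set https-proxy http://localhost:3128

    rem Bundler, git, and most other CLI tools honor the standard environment
    rem variables, so they can ride through the same local endpoint.
    set "HTTP_PROXY=http://localhost:3128"
    set "HTTPS_PROXY=http://localhost:3128"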

The Root of the Problem: Microsoft, and Windows

If the shared computer had been Linux or Mac, none of these problems would have existed. Each account on Linux and Mac has a proper personal directory, and things like Node and Ruby assume this, and take advantage of it. Each user could install whatever they wanted in their home directory, without needing administrative permissions on the machine, or having to rely on some internal application-distribution site. Also, if developers could use anything other than Windows, corporate IT would probably not assume that everything which gets forced through the corporate firewall can do NTLM authentication, and force people running tools like NodeJS to rely on a squirrely tool like Corkscrew. Windows has gotten a lot better over the past several years about installing things into a user’s AppData directory, and Microsoft has spent a lot of effort in recent years to develop and astroturf WSL(2), Visual Studio Code, and the new Terminal, but Windows is still a second-class citizen for modern web programming.

I try to temper my frustration with this situation with the knowledge that IT departments of large companies have been forced into many, cascadingly-obtuse compromises by their use of Windows. So many frustrations in a company’s user community can be traced back to the relatively quirky, and single-user-oriented way Windows has always worked, and the monoculture that using Windows requires, thanks to Microsoft’s legacy of embrace-and-extend, especially in directory services. The size of the company exacerbates the problem. At my current company, I know of at least 5 different IT org trees. After 6 years of working with various people in these groups, I still have very little understanding of who actually owns what. To be fair, most of this is felt by only a small portion of the “power user” community at a company, but that’s most of the people I deal with.

The Distortion of Scale

The biggest problem here is the scale of the operation. When you have 50,000 nails, you make sure they’re all the same size and finish, and you use the exact same kind of hammer and technique on all of them. You’d think it would be possible to use a bit of manpower in these various IT departments to treat some of these nails differently, but the vast ecosystem required to take care of Windows just eats up all available resources. Anti-virus. VPN. Standard desktops. Scripts to prevent people from doing things they shouldn’t. Scripts to report all activity on the things they should. Office 365. OneDrive. Teams. Zoom. Forced password rotations. Worldwide hardware and software upgrades. Locking down how long the screensaver takes to kick in. Preventing changing of custom login screen backgrounds. It’s a lot. I get it. Using Windows as a corporate desktop environment automatically assumes so much work, it leaves little room for treating a computer like a tool that needs to be customized for the job it needs to do, and the work it needs to support, even when those goals are, ostensibly, also primary goals of the larger IT organization. It’s a counter-intuitive situation.

I started this post by pointing out that this stack of regrettably-predictable compromises, which result in suboptimal policies and outcomes, is primarily a problem with traditionally non-“tech” companies, but the real, underlying problem is much deeper.

The truth is that all companies are now “tech” companies, whether they realize it or not. And those that can’t change their approach to IT to adapt to this new reality — or change it fast enough to matter — will wither on the vine, and their remaining assets, eventually, will be picked up in a corporate yard sale to companies that have “tech” embedded in their DNA from birth.

I worry that a company which, 30 years later, still breaks up its most-important digital asset into 8 pieces because that’s what would fit on a floppy disk will not make the turn in time.

The reason I started writing all of this down was because — after all of this time and discussion — I was asked to approve NodeJS version 10 for the internal software repository. At the time I was asked, version 10 didn’t even show up on the NodeJS release page any more. They were shipping version 16. I guess 10 is better than 8, but let’s be honest: The only reason they gave up on version 8 or 9 is because the version of Angular that they’re using is refusing to work with anything pre-v10. That happened back in Angular version 8, which is now also out of support.

As part of the great email chain, I pleaded with the various people involved with the internal software approval process that keeping up with the shifting versions of your tools and supporting libraries is just part of the job of being a web app developer, yet no one even batted an eye. You would have thought that this concept would have fallen directly under the multi-headed hydra of “security,” and the company’s philosophy seemed to be that you can never have too many software layers or policies about it. You would have thought they would have pounced on the concept in order to at least seem serious. I even invoked the specter of the recent, infamous log4j bug, as an example of the risks of letting things get out of date. This issue caused an audit of every Java-based application in the company, so it should have been a touchstone issue which everyone in the chain could relate to. But if anyone could understand what I was trying to say, they apparently didn’t care.

IT Best Practice vs IT Policy

I didn’t much care for The Big Bang Theory, but one scene has stuck with me for a long time. In S1E16, Sheldon is shopping in a store like Best Buy, and some woman comes up to him and asks, “Do you know anything about ‘this stuff?'” He replies, “I know… everything about ‘this stuff.'” And that’s the heck of this situation. It’s almost like every single person concerned with this process has absolutely no idea how any of “this stuff” actually works, and won’t listen to someone who does. And I realize how conceited that may sound, but, in this case, I don’t know how else to put it.

The only other explanation is simply apathy in the face of bureaucracy, and I wish senior IT management would take it on themselves to root out this sort of intransigence, and fix it. It would seem to be their job, and would go a long way to justifying a C-level salary. Unfortunately, this isn’t the first time I’ve found myself trying to explain a direct contradiction between IT best practice and IT corporate policy to the very people who are supposed to be in charge of both. I’d like to think I’ve learned how to convey my thoughts in a less confrontational way, but I obviously still haven’t figured out how to motivate people to rise above the internal politics and align the two, and that makes me sad.

I’m finally posting this because I just got another request to approve version 8, now three and a half years on, and I needed to vent.

¯\_(ツ)_/¯

Update 1

A couple weeks after posting this, I got CC’d on a long desktop support email chain from a developer in India who can’t get angular-cli version 7.x working with npm. <sigh> And there are four references to how urgent and how high a priority this is. A simple search shows a pretty detailed SO post about the particular error message, and the general answer seems to be to either play games with the particular versions of the dependencies, or just upgrade to 8 or 9… three years ago. In any case, this isn’t a desktop support question. IMNSHO, this is squarely a developer’s issue. Sorry, but that’s the job, brother. Do I try, feebly, to make another point, or just let this go?

Update 2, eight months later

Because everyone got new laptops, I was looking around the internal company web page for software installation. And what do you think I happened to see? That’s right! Got it in one try! To be fair, there’s a newer version, but this version should simply not exist, anywhere, for any reason, at this point.

Still There