Another Day, Another Boneheaded Move by #CorporateIT

I’ve been having mysterious problems with both of my corporate computers. Things that used to run only sort of run now. Today, I finally figured out that this is happening because #CorporateIT, in its ineffable wisdom, has decided to suddenly start automatically deleting any customizations to either the system or the account PATH variable by way of login (or logoff, or startup, or shutdown) scripts.

Years ago, Arvin was a lovely company with lovely people. Then it was sold out from under us, and eaten alive by Meritor (which has now been eaten by Cummins). They made a big show of bringing in some bonehead whose job was to set up “proper” IT policies. I watched in horror as he obviously just slapped together a bunch of white papers he’d rummaged through the internet to find, copy-and-pasted them into “controlled” Word docs with company logos in the header, and presented them as a legitimate security posture, despite obvious problems and glaring inconsistencies. Unintimidated, I took him to task about it. We went a couple of rounds, which ended with him literally screaming at me over the phone. I finally got the attention of one of the senior IT directors, and got a chance to vent about the situation.

One of the things I complained about was the removal of cron from all Unix machines, which I (as a Unix admin, at the time) was making liberal use of. First, cron doesn’t allow you to do anything you couldn’t otherwise do, so why remove the convenience? Second, if running things out of hours or on a schedule is a Bad Thing (TM), then why weren’t we also removing Task Scheduler from all Windows machines? Third, if this is about a security vulnerability in the binary, then just make sure you’re keeping up to date with patches from the vendor, just like everything else.
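For context, the sort of thing I was using cron for is about as mundane as it gets; a crontab entry is a single line. These jobs are invented examples, not my actual crontab:

```
# min  hour  dom  mon  dow   command
  30   2     *    *    *     /usr/local/bin/rotate-app-logs
  0    8     *    *    1-5   /home/admin/bin/nightly-report.sh
```

Removing the daemon doesn’t remove the capability; it just means a person has to remember to run these by hand.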

The director then told me that that particular policy provision was actually written by her, as though this was supposed to make me suddenly backtrack, and withdraw my objection. I asked her why, and all she could do was say that this was considered an “industry best practice.” Yeah, but why!? The bottom line is that this was an unintended consequence of SOX. It’s just a thing that’s easy for consultants to suggest, easy for IT staff to do, and easy to verify, and it makes a nice bullet point in a validation study about IT policies. Job done! Give IBM $100K to rubber stamp our SOX compliance report! But it does literally nothing to “secure” anything. All it can do is inconvenience users.

If there’s an actual security flaw in the cron daemon itself, then get it patched! There’s no reason to eliminate it entirely. At least, it’s not worth the inconvenience of uninstalling it on the slight chance that a new vulnerability might be found in it, and get exploited by a bad actor, before it can be patched.

This is a hill I will die on.

I got my cron back.

Today’s issue with #CorporateIT is the same. Now I can’t run rails or rake or git at the command line unless I fully “path” them. This is what has been breaking my scripts. And I know they’re nuking both system and user PATH variables, because I tried the second after noticing that the first was being blown away. Why in the world are we deleting customizations to the PATH variable? On what planet does this make anything more secure? What malware wouldn’t try all known paths, regardless of the PATH setting, or fully path its own executables? How can this do anything but make people’s lives less convenient? It’s still possible to set, of course, so I guess I’ll write a .BAT script to run when I want to start working, which will update my user PATH variable so I can just get on with it.
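That .BAT script, for what it’s worth, is a one-liner plus comments. This is a sketch; the directories are placeholders for wherever Git and Ruby actually live on my machine:

```bat
@echo off
rem fix-path.bat - re-add the directories #CorporateIT keeps nuking.
rem Quick and dirty: setx rewrites the *user* PATH permanently, so the
rem next login script has something to delete all over again.
setx PATH "%PATH%;C:\Program Files\Git\cmd;C:\Ruby27\bin"
```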

Wow. We’ve really locked down the configuration, huh, guys? The bad guys have no chance now!

To me, the implementation of any security measure depends on the answers to some fundamental questions: What’s the vulnerability? How large is the risk? What’s at stake? What is the mitigation? Is the preventative fix worth the cost in terms of money, access, and productivity? What’s the data we are protecting worth, such that it makes sense to implement the policy? I understand there’s a lot of subjectivity here, but these questions will separate the wheat from the chaff really quickly.

For instance, the staggering mountain of PowerPoint presentations that no one having a meeting can seem to do without, sitting on the corporate file server, means nothing to anyone outside of the people who are having meetings about it, and even then, only for the week they’re having the meeting. Does it make sense to install every security product on the market to protect this “information?” Not in a million years. Even Office documents you think are profoundly important are hard to dig up out of your collection after a little while, and hard to make sense of once you do. How would any of this “data” be strung together in any useful way by bad actors? For all of the hand-wringing about it, the shared drives could be open to the public, for all the risk it actually poses to the company.

I have another story about this, but I’ll save it for another time.

Every time we turn around, IT has implemented a new policy, a new layer, a new product that’s supposed to make our “data” “more” “secure,” and each time it happens, we lose the ability to do something useful. #CorporateIT dictates that our Teams chat histories vanish after just 24 hours. In a company which requires a month for anything to get done, and often requires multiple tries, it would be nice to be able to refer to that log for a month, no? Does no one in the company see this? What sort of crack-addled meeting was held between legal and IT to come up with this? Deleted email disappears after 30 days. If you want to save it to refer to later, you need to remember to hit the “archive” button. Again, when things take months to happen… But sure, blame it on litigation.

The really stupid part of this? These moves won’t save you legally. People involved with whatever is being discovered will be called to testify, under oath, to what they said, regardless of records that attest to it. So this does nothing to prevent legal culpability. It’s just another hassle for end users in the name of a tick box on an auditor’s checklist.

Every week, there’s a new thing to justify a budget. Every week, it’s a new, unannounced loss of capability. I’m really getting tired of it.

Update

About a week after I wrote this, a coworker sent out an email to our entire group, saying that hundreds of thousands of documents we still rely on had been automatically deleted from our SharePoint files and Teams channels. He said that they have restored these things, and he was working with IT to make the auto-delete policy kick in at 10 years, instead of the current 3. This is exactly what I’m talking about when I say that, if a company moves at a pace where even the simplest things take a month or three to do, then we need chat history to last at least this long. Our projects are sometimes decades long. We need our stuff for at least that long.

This is a perfect example of IT setting “security” policy without asking the basic questions above, and living in a fantasy world where they are free to believe that their consultant-and-whitepaper-suggested rules don’t have costs. At least my coworker didn’t throw up his hands, and say (basically), “You can’t fight city hall!” He took them to task, and now they’ve had to realize, in at least this one case — for, again, no actual legal benefit — the utter hassle they incur when their incentives are misaligned with the people who do the work that keeps them employed.

Update 2

Here we go again…

Now people are educating each other about how to save important documents from being automatically trashed from OneDrive.

Employee claims she can’t use Microsoft Windows for “Religious Reasons” : Reddit/r/AskHR


And they let her! You mean, all this time, I could have requested Linux on my corporate laptop for religious reasons!? BRB. Going to HR to explain my actual, deeply-held beliefs on this…

Tax Exemption for Churches (Is the Wrong Question)

For the many-th time, I see a repost from Twitter on some other social media site, complaining about the wealth of mega-church pastors, and trying to rile people up about how churches should NOT be tax exempt. And, sure, Joel Osteen’s lifestyle is a mockery of Jesus’ life, but there are only a handful of “mega” churches and “mega church” pastors in this country. Meanwhile, many, many thousands of the so-called 1% in this country pay a lower tax rate (and sometimes, ACTUAL tax) than the average, blue- or white-collar person does.

As a country swimming in debt, we would get a lot more mileage out of calling for meaningful taxation of billionaires and multi-hundred-millionaires before we start worrying about removing tax exemptions for churches and pastors. I think those posts and reposts on Twitter are probably jointly paid for by The Koch Brothers and George Soros, for the class-warfare angle. And maybe Bill Gates, for the anti-religion angle.

Joel Osteen pays taxes on his income. How much of it he has managed to shelter from the IRS is a game played just like all the rest of the 1%. The church, as a non-profit, does not pay taxes, because the money being received in donations cannot be considered a profit to tax. That’s the definition of how non-profit organizations work.

Churches are supposed to be prevented from getting involved in politics. It’s part of the deal in being religiously tax-exempt. (How this works when Presidents and candidates go to churches and make speeches from the pulpit is quite beyond me, but I digress.) If you start taxing churches, then there’s no reason for them not to get heavily involved in promoting particular candidates, and forming political action committees, just like corporations, taking an active role in getting people elected, and lobbying government for favorable treatment.

You may retort that large, corporate churches like the Catholics or Mormons already exert a huge influence on government, and I’d say you’re right, but it’s still less than the average Fortune 100. If we open the floodgates here… With the “war chests” accumulated by both of those organizations? As they say: you ain’t seen nothing yet.

Do the people calling for the removal of tax exemptions for churches really understand what they’re asking for? I don’t think they do.

Corporate IT “Automated Systems”

Today, I was contacted by #CorporateIT as to whether I was still using <expensive software>. I said no, and that I had tried to uninstall it, but it didn’t work. And, by the way, I’ve tried to “surrender” several other applications, so that my department is no longer billed for them, and NONE of them have worked.

So #CorporateIT guy forwards my email to <IT Director>. I’ve worked at Cummins for 10 years, and still can’t figure out the organizational structure. Anyway, he explains that their systems are great, and process 12,000 requests per month without any problems. I thank him for the considerate response, but this doesn’t change the fact that this has never worked for me, not even once.

Then I get dressed, go into the office, and try to “surrender” one VERY expensive piece of software from a machine that needs to be retired, and I get yet another error message. Now, I understand that these are (probably) not the same systems under the skin, but it’s the same aggravation, and I just wish that the people running these systems lived in the same IT world that the rest of us do.

Corporate IT “Support”

Suppose you have a problem on your company laptop, for which you contact #CorporateIT, using Microsoft Teams. Now, you’re already logged in as yourself, in Teams, but the first thing they always ask is to confirm that it’s… you, who is contacting them. Then they always ask whether you’re at a company site or home, and what your phone number is. Now, about 90% of everyone is working from home, and with the VPN, it wouldn’t matter anyway. Also, they have no need to call you on the phone. In the rare event they want to use voice, they’ll just use Teams!

So, given the lag of getting a person from the queue, and the normal flow of question and response, you’re about 5-10-15 minutes into the process, and you have wasted the entire time by answering three stupid questions. But if you don’t respond in about 10 seconds, you’ll be badgered with, “Are we still connected?”

After your chat, you’ll get prompted to rate support’s “help” in Teams, and then you’ll get about a dozen emails — whether or not they helped you in any way — including ANOTHER prompt to rate their helpfulness. And they are judged on this. I once rated a support tech poorly, because he was completely unhelpful, and didn’t even try to escalate the problem. He contacted me back to argue about it, and couldn’t disagree with my rating, but pleaded with me to change it, because that’s how they stay employed. I did, but I just don’t bother with the ratings any more.

AI and the Big Five – Stratechery by Ben Thompson

Mobile ended up being dominated by two incumbents: Apple and Google. That doesn’t mean it wasn’t disruptive, though: Apple’s new UI paradigm entailed not viewing the phone as a small PC, a la Microsoft; Google’s new business model paradigm entailed not viewing phones as a direct profit center for operating system sales, but rather as a moat for their advertising business.

Source: AI and the Big Five – Stratechery by Ben Thompson

I think it’s worth noting something here. Just before this paragraph is this:

The PC was disruptive to nearly all of the existing incumbents; these relatively inexpensive and low-powered devices didn’t have nearly the capability or the profit margin of mini-computers, much less mainframes. That’s why IBM was happy to outsource both the original PC’s chip and OS to Intel and Microsoft, respectively, so that they could get a product out the door and satisfy their corporate customers; PCs got faster, though, and it was Intel and Microsoft that dominated as the market dwarfed everything that came before.

It seems to me that Microsoft was guilty of the same sin as IBM when it came to mobile. IBM viewed PC’s as tiny little mainframes. Microsoft viewed “smart” phones as tiny little PC’s.

Whenever people write like this, it nags at me that a massive, multinational corporation’s motivations could be represented by a single viewpoint, held by a single person. But then I force myself to relax, realize that the organization’s actions really do boil down to being explained like this, and commit to the simplification for narrative purposes. So, acknowledging this… What “IBM” couldn’t “see” was that, while “limited” in relation to a mainframe, the PC was capable enough to do things that mainframes couldn’t do. I’ll never forget the Aha! moment I had in my first engineering job. “I was there, Gandalf; 3000 years ago.”

I was working for a small (80-ish people) company that made air compressors. They had just gotten bought by a huge, multinational air tool conglomerate, and the former owner had spun off a tiny portion of the tiny business to a new, separate company. As part of the new owner’s investment, the company was buying new PC’s for “the office.” Five of us got new, genuine IBM, i486DX2 66 PC’s with all the goodies, including real IBM Model M buckling-spring mechanical keyboards. They were glorious.

In an old garage, next to the main building, was a pile of “stuff” leftover from the rearrangement. In that pile, I found an internal, 4800 bps modem, and a full-length “mainframe” card, for attaching to a token ring network, and emulating a terminal. I installed both into my PC, and got my boss to let me get a Prodigy account. (And discovered Doom.) The “mainframe” card allowed me to connect to the mainframe, but I didn’t (and still don’t) know anything about mainframes, so I just left it there.

Then my boss asked me to do a BOM comparison between 2 similar compressor models, and pointed me at two giant, mainframe printouts on green-bar, sprocket-fed paper, in those terrible binders with the variable-length metal straps to hold them together. They were about 2 inches thick. I started to compare the paper reports for about a minute before I had a thought…

I got the lady who ran the mainframe (an IBM System/36) to make me BOM reports for both compressor models. This apparently took an entire program to be written, and it was no wonder that mainframes were already dying by 1993, but I digress. I was able to download the reports to my PC over the “mainframe” card. Of course, these reports, being simple lines of text, were only a megabyte or so, but I had eight megabytes in my fancy new PC! So I was able to import both BOM’s into Quattro Pro, and do some spreadsheet manipulation to show the differences.
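These days, you wouldn’t even need the spreadsheet for the first pass. Assuming the BOM reports export as one part number per line (the part numbers below are made up for illustration), standard Unix tools do the comparison directly:

```shell
# Fabricate two tiny BOM exports for illustration; the real ones were ~1 MB.
printf 'bolt-100\ngasket-220\nvalve-301\n' | sort > model_a.bom
printf 'bolt-100\npiston-415\nvalve-301\n' | sort > model_b.bom

# comm needs sorted input; each flag suppresses one column of its output.
comm -23 model_a.bom model_b.bom   # parts only in model A
comm -13 model_a.bom model_b.bom   # parts only in model B
comm -12 model_a.bom model_b.bom   # parts common to both
```

The spreadsheet still earns its keep once quantities and descriptions enter the picture, but the point stands: the PC could do, in minutes, an ad-hoc job the mainframe needed a whole program written for.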

This sort of simple, quick, ad-hoc query and reporting capability, enabled by spreadsheets, has been the backbone upon which all corporate business has been run for almost 30 years. A lot of company data now lives in cloud services, which have their own query and reporting tools, but my perception is that Excel is still a core tool that the majority of people in the Fortune 1000 are using to manage their workflows. Like, you could take away literally everything else but Excel and email, and you’d be fine. It would take some adjustment, of course, but the business would carry on. That’s how critical it is.

IT managers in large corporations like to think that their multi-million dollar IT systems are special, and there’s an attitude that the company couldn’t exist without them now that they’ve been implemented. Entire kingdoms are built around them in the modern, feudal-like corporate system present in every Fortune 1000. However, the people running these systems don’t seem to understand that there is invariably enormous activity in the company devoted to shoring up these systems with ad-hoc tools in Excel, simply because the team responsible for the system will never have the time to implement the customizations the users need to make the system truly useful for their work. Or, if they do know it, they ignore it, and they can, because they are not held accountable for the vast quantities of technical debt and wasted work caused by their compromised implementation, which stopped short of all the promises upon which the system was sold to the monarchy. The true costs were never actually presented, and now that “shortage” not only gets spent, but gets duplicated all over the company, because spreadsheets do not “scale.”

I didn’t start out to make that point, but this is why I write: to “work out my thinking,” as I state in the sub-title of this blog.

Why do I know? Because if I could sum up my 27-year career, the central theme of it would be creating applications to replace terrible, shared Excel spreadsheets with — hopefully, less terrible — web and native applications, tailor-made for the workflow the spreadsheets were supporting. I can count 13, right off the top of my head, and I’m sure I’m forgetting some of the smaller ones. I’ve spent about 21 of my years in Fortune 250’s, so maybe I have a jaded view, but my feeling is that this extrapolates through all big companies, world-wide.

This is what IBM missed. People know what they need, and will use “manual” effort to get around corporate IT lethargy. At first, it was routing around mainframes, and their impossibly slow development times. Now it’s every large, “corporate” system, like CRM or ERP or PDM, and their impossibly slow development times. The limitation of “the mainframe” wasn’t in its hardware or its development language; it was in the system of fiefdoms that is the corporate budget allocation system, and the unintended consequences it produces, specifically the unaccountability inherent in the fact that the monarchs can’t understand the technical and logistical limitations in customizing a large system, and the true costs are therefore elided in the endless budget cycle. And when an aging system is deemed fit to retire and replace, the whole cycle starts all over, with corporate IT creating a system just shy of what’s really needed, and end-users creating spreadsheets to backfill the gap.

A lot of these kinds of systems — particularly HR — have been moving to the cloud. Why? In my estimation, it’s not because they’re cheaper, even on paper. It’s because those systems are fully-formed, and include all the end-user-facing querying and reporting needed to make the system useful for every requirement. Fortune 500 companies could have made a streamlined version of, say, Workday, for their specific, internal use, but corporate IT — as a standalone, ivory tower, ultimately beholden only to the CEO, who couldn’t care less — could never figure out how to work closely enough with the user community to address all of their needs. So now, users have to put up with yet-another-end-all-be-all system, designed to address the needs of every company on earth. But! At least, once they figure out the workflow to get what they need, it’s all downhill from there. Here’s the key: at least it’s possible without Excel.

More and more workflow operations will continue to expand into cloud-based services, but it’s only possible to do this with services every company needs. This is why we’re seeing a deluge of advertising for HR apps, even on TV, each designed to hit a different company size and price point. It’s not possible to do this with, say, PDM applications, so companies like mine are going to continue to be hamstrung with systems like Integrity/Windchill. On the one hand, it’s become an important tool which must be used to get products out the door. On the other hand, it doesn’t do a whole bunch of stuff people really need it to do with the data it already has — and it never will — so there are a whole bunch of Excel spreadsheets running loose in the company that duplicate the data, waste the manual effort, and do the things that need to be done, which IT has no knowledge of, and does not care about, because it doesn’t show up as a liability against their budget. And the situation will continue, for the foreseeable future.

Don’t Get Involved with Things you Can’t Fix, and You Can’t Fix Stupid

Twenty-odd years ago, I was involved in a Product Data Management system implementation. This is just part of a much larger story, but the salient point from the epic saga is that I worked for a psychopath, and he tried hard to make my life difficult. I never figured out why. I think it was because he blamed me for something my previous boss did to his project. Anyway, we’ll get back to him later.

I was operating as a sysadmin, tasked with helping the main admin from France install an application on our servers, here in the US. At the time, corporate IT had just made it policy that no one but them could have root on machines hosted in their data center. On Unix (as opposed to Windows), I didn’t mind. That works just fine. However, the other admin had made getting root his #1 requirement. I told him of the policy. He didn’t relent. So I tried to escalate the coming train wreck with my management and everyone in corporate IT, hoping that something could be worked out before he arrived.

The guy shows up, shakes my hand, and asks me for the root password. I get on the phone with the main Unix admin. They finally relent, and allow me (because I’ve known them for 6 years by that point) to sudo to root to setup all the prerequisites.

The other admin is furious, tells us he can’t do anything until he gets root, and goes back to his hotel. Next day. Big meeting. Everyone on the phone. Group in one office, corporate IT in theirs, admin from the hotel, boss in the UK. I ask: “Michael, what specific commands do you need to run as root?” He says — get this — “You get in your car, and you turn the key, and it starts up. You don’t know how; it just works.”

In our room, we all just looked at each other in disbelief. First of all, he was talking to a bunch of mechanical engineers who happened to fall into implementing a PDM project. We all understood exactly how cars work. Second of all, everyone on the call would expect “the expert” at installing the application stack to be able to answer the question.

It was clear there was no arguing about it further, and the project had to get done so that he could shuffle off back to France, so they gave him root, and he did his thing from the hotel, and never spoke to me again.

After all the nonsense, you know what the problem was? The application server was configured to run on port 80, out of the box. That’s it! It assumed it would be running on the standard, privileged port. We could just as easily have configured it to run on port 8000, or port XYZPDQ. It didn’t matter! We had a load balancer running on port 80 in front of it. It could have been any port we wanted! Our “expert” admin couldn’t understand that, and my fearless management wouldn’t hold him accountable for such an elementary understanding of what he was doing.
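To make the point concrete: the topology made root irrelevant. In nginx-style pseudo-config (the actual load balancer was something else entirely, and every name and address here is invented):

```
# The load balancer is the only process that needs privileged port 80.
upstream pdm_app {
    server app01.internal:8000;   # app server on any unprivileged port
}
server {
    listen 80;
    location / {
        proxy_pass http://pdm_app;
    }
}
```

Only a process binding a port below 1024 needs root; everything behind the load balancer could run as an ordinary user on any port we liked.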

In the weeks after, I realized that my boss had made me the scapegoat with upper management for the situation, because I was the one that tried to head this disaster off at the pass. Since I had sent emails, and talked about it, apparently I was the one who was causing the problem. This was just one of the many conflicts with my psychopathic boss. I had to learn a lot of hard lessons about politics over the 3 years on that project, but this one backfired in the most unexpected way.

Unfortunately, I had basically the same sort of thing happen again a few years ago. I tried to warn my management that IT was telling me something really, really stupid, and that it was going to come to a head in a spectacular way. But they couldn’t understand anything I was telling them, and trusted that IT knew better than I did. The problem is that IT didn’t want me to be working on the project. They felt they should have been the ones to “get the business” to develop it, and were actively trying to slow me down. Unfortunately, I didn’t learn what else to do in this situation except continue to try to educate the people who are looking at me like I’m crazy. Anyway, maybe I’ll blog that one 20 years from now.

Pluralistic: 21 Aug 2022 The Shitty Technology Adoption Curve Reaches Apogee – Pluralistic: Daily links from Cory Doctorow

Office 365 went from being an online version of Microsoft Office to being a bossware delivery-system. The Office 365 sales-pitch focuses on fine-grained employee tracking and comparison, so bosses can rank their workers’ performance against each other. But beyond this automated gladiatorial keystroke combat, Office 365’s analytics will tell you how your company performs against other companies.

That’s right – Microsoft will spy on your competitors and sell you access to their metrics. It’s wild, but purchasing managers who hear this pitch seem completely oblivious to the implication of this: that Microsoft will also spy on you and deliver your metrics to your competitors.

Source: Pluralistic: 21 Aug 2022 The Shitty Technology Adoption Curve Reaches Apogee – Pluralistic: Daily links from Cory Doctorow

I feel like a fool. I watch Microsoft like a hawk, and I didn’t even know about this. Every time I think I’m too cynical about a FAANG company — and Microsoft in particular — I find that I haven’t been nearly cynical enough.

With this new LinkedIn connection, in Outlook, it’s now possible for Microsoft to connect a particular person to a particular user in your current company’s “metrics.” I suppose they could use this to juice search results for recruiters in LinkedIn, or provide reports to potential employers. I wouldn’t put any of this past them.

Corporate IT, NodeJS, “Tech” Companies, and Freaking Microsoft Windows

The Scene

A few years back, as part of a long, slogging series of unfortunate events, I had been tasked with developing a new web application, which circumstances dictated should be written in Java. Books could be written about this one-year period of my career. (And not, like, inspirational ones.) Anyway, part of the process included trying to get people to realize that no one, these days, wrote web apps in Java without using one of the many popular JavaScript libraries for the front end (like React or Angular), and get my management and corporate IT to understand that I needed to install NodeJS on my machine to facilitate this. Up until this point — and despite the fact that it was obviously used by other development teams in the company — it was not on the “approved” list of software to be installed on local machines. Through several strained meetings and rounds of email, someone, somewhere, deep in the bowels of IT, corrected the obvious oversight, and put it on the list.

The production version of NodeJS was 8, at the time of approval.

This kerfuffle was but one small facet in the gem that was this job posting. In the middle of the development process, I jumped at another job opportunity, and left my Fortune 250 for a different Fortune 250. The IT environment was eerily similar, and led to this post about making Windows tolerable. It was this experience that got me to see the real root of what I’m complaining about here.

And then, through a short series of more unfortunate events — and one amazing event — I came back to the original Fortune 250, in a different department.

Some months later, just after getting settled back in, I got an email asking me if I would approve a new version of NodeJS to be officially blessed and uploaded to the internal repository.

A Symptom, not the Disease

Strangely, I was being asked to approve NodeJS version 9. If you’re not familiar, NodeJS uses a version numbering system like the Linux kernel used to, where even-numbered releases are for production use, and odd-numbered releases are development versions, intended only for development of the software itself. In no way should 9.x be considered for use in projects inside a blue-chip Fortune 250.

I explained this situation to a laundry-list of TO: and CC: recipients in a long email thread that had already been making rounds inside the company before someone finally saw my name attached to the original request, and added me to the chain. Of course, my explanation was ignored, but I only discovered this 6 months later, when I was being asked, again, to approve version 9. Apparently, I was preventing some developer in India from doing his work on a “high priority project” by not having approved it already, and I needed to get on the stick.

I became more blunt at that point. First, I didn’t do whatever was done to get it certified the first time, so I didn’t know why I was being called on to do it again. Second, I tried to make a case for exempting development libraries, like NodeJS, from the slow process of getting them approved for internal use, and uploaded to our internal software delivery site. This led to another important person added to the chain, who, surprisingly, supported my argument, but, again, nothing changed.

A month later — seven months into this “discussion,” and presumably still holding up a “high priority” project with a “requirement” for 9.x — I got another email, which included a screenshot of an error from Angular, saying that it no longer supported NodeJS 8.x, and that it needed at least version 10.x or 12.x. Again, I pled with the list of people involved in the email chain that we needed to treat development libraries and applications differently than we treated, say, Office applications. I pointed out that, in the time that we had been fussing over version 9, version 14 was now shipping.

Six months after this exchange, I got an email from a desktop support technician. He was asking for clarification about details when installing… wait for it… version 8 on a developer’s computer. That’s right: After over a year of this exercise, we were still fighting to get a version that’s now a year and a half out of support installed on a developer’s machine.

And then the situation actually got even worse. The developer’s “computer” was really a shared environment (like Citrix, et al.), and the shared NodeJS install was being constantly re-configured between multiple developers using the same machine between projects. The support person was actually savvy enough to have suspected this, and was asking me how it worked. I confirmed that this would, indeed, be a problem, and we figured out the flags to install it into each person’s personal directory and keep the node_modules directory separate, per user. So, at least we figured out how to successfully install a dangerously out-of-date version of Node onto a shared computer.
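For the curious, the per-user setup we landed on looks something like this. This is a minimal sketch from memory, not the exact commands we used; the directory names are illustrative:

```shell
# On a shared machine, keep each user's global packages and download
# cache in their own home directory, so accounts don't clobber each
# other's node_modules between projects.
mkdir -p "$HOME/.npm-global"
npm config set prefix "$HOME/.npm-global"
npm config set cache "$HOME/.npm-cache"

# Make per-user "global" installs (e.g. @angular/cli) resolvable.
export PATH="$HOME/.npm-global/bin:$PATH"
```

The `export PATH` line belongs in each user’s shell profile so it survives logout; that was the piece that tripped people up on Windows, where there is no obvious equivalent of a per-user profile script.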

Actually trying to use NodeJS for the job it was created for, and downloading a stack of JavaScript libraries to support Angular or React, led to another discussion of how to get it to play nicely with our corporate, Active Directory-authenticated firewall, which — naturally — blocks all access to the internet from anything that doesn’t run through the Windows TCP/IP stack. Say, like npm or yarn trying to access the NPM repository. I had figured out a workaround for that in the first few months of working at the company, and just pointed them at Corkscrew, which transparently handles the NTLM authentication for command-line utilities like npm (or Ruby’s Bundler).
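The general shape of the workaround, for anyone hitting the same wall: run a local tunnel that handles the proxy authentication (Corkscrew in our case; Cntlm is another tool commonly used for NTLM proxies), then point npm and yarn at it. A sketch, with a hypothetical local port:

```shell
# Route npm's registry traffic through a local tunnel listening on
# port 3128. The port is hypothetical -- use whatever your tunnel binds.
npm config set proxy http://localhost:3128
npm config set https-proxy http://localhost:3128

# yarn (v1) keeps its own proxy settings, so set them separately.
yarn config set proxy http://localhost:3128
yarn config set https-proxy http://localhost:3128
```

Once the tunnel is doing the authentication, everything that respects these settings — npm, yarn, and most tools built on them — just works, without each utility needing to understand NTLM itself.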

The Root of the Problem: Microsoft, and Windows

If the shared computer had been Linux or Mac, none of these problems would have existed. Each account on Linux and Mac has a proper personal directory, and things like Node and Ruby assume this, and take advantage of it. Each user could install whatever he wanted to in his home directory, without needing administrative permissions on the machine, or having to rely on some internal application-distribution site. Also, if developers could use anything other than Windows, corporate IT would probably not assume that everything which gets forced through the corporate firewall can do NTLM authentication, and force people running tools like NodeJS to rely on a squirrely tool like Corkscrew. Windows has gotten a lot better over the past several years about installing things into a user’s AppData directory, and Microsoft has spent a lot of effort in recent years to develop and astroturf WSL(2), Visual Studio Code, and the new Terminal, but Windows is still a second-class citizen for modern web programming.

I try to temper my frustration with this situation with the knowledge that IT departments of large companies have been forced into many, cascadingly-obtuse compromises by their use of Windows. So many frustrations in a company’s user community can be traced back to the relatively quirky, single-user-oriented way Windows has always worked, and the monoculture that using Windows requires, thanks to Microsoft’s legacy of embrace-and-extend, especially in directory services. The size of the company exacerbates the problem. At my current company, I know of at least 5 different IT org trees. After 6 years of working with various people in these groups, I still have very little understanding of who actually owns what. To be fair, most of this is felt by only a small portion of the “power user” community at a company, but that’s most of the people I deal with.

The Distortion of Scale

The biggest problem here is the scale of the operation. When you have 50,000 nails, you make sure they’re all the same size and finish, and you use the exact same kind of hammer and technique on all of them. You’d think it would be possible to use a bit of manpower in these various IT departments to treat some of these nails differently, but the vast ecosystem required to take care of Windows just eats up all available resources. Anti-virus. VPN. Standard desktops. Scripts to prevent people from doing things they shouldn’t. Scripts to report all activity on the things they should. Office 365. OneDrive. Teams. Zoom. Forced password rotations. Worldwide hardware and software upgrades. Locking out how long the screensaver takes to kick in. Preventing changing of custom login screen backgrounds. It’s a lot. I get it. Using Windows as a corporate desktop environment automatically assumes so much work, it leaves little room for treating a computer like a tool that needs to be customized for the job it needs to do, and the work it needs to support, even when those goals are, ostensibly, incidentally, also primary goals of the larger IT organization. It’s a counter-intuitive situation.

I started this post by pointing out that this stack of regrettably-predictable compromises, which result in suboptimal policies and outcomes, is primarily a problem with traditionally non-“tech” companies, but the real, underlying problem is much deeper.

The truth is that all companies are now “tech” companies, whether they realize it or not. And those that can’t change their approach to IT to adapt to this new reality — or change it fast enough to matter — will wither on the vine, and their remaining assets, eventually, will be picked up in a corporate yard sale to companies that have “tech” embedded in their DNA from birth.

I worry that a company which, 30 years later, still breaks up its most important digital asset into 8 pieces because that’s what would fit on a floppy disk will not make the turn in time.

The reason I started writing all of this down was because — after all of this time and discussion — I was asked to approve NodeJS version 10 for the internal software repository. At the time I was asked, version 10 didn’t even show up on the NodeJS release page any more. They were shipping version 16. I guess 10 is better than 8, but let’s be honest: The only reason they gave up on version 8 or 9 is because the version of Angular that they’re using is refusing to work with anything pre-v10. That happened back in Angular version 8, which is now also out of support.

As part of the great email chain, I pleaded with the various people involved with the internal software approval process that keeping up with the shifting versions of your tools and supporting libraries is just part of the job of being a web app developer, yet no one even batted an eye. You would have thought that this concept would have fallen directly under the multi-headed hydra of “security,” since the company’s philosophy seemed to be that you can never have too many software layers or policies about it. You would have thought they would have pounced on the concept in order to at least seem serious. I even invoked the specter of the recent, infamous log4j bug as an example of the risks of letting things get out of date. That issue caused an audit of every Java-based application in the company, so it should have been a touchstone which everyone in the chain could relate to. But if anyone could understand what I was trying to say, they apparently didn’t care.

IT Best Practice vs IT Policy

I didn’t much care for The Big Bang Theory, but one scene has stuck with me for a long time. In S1E16, Sheldon is shopping in a store like Best Buy, and some woman comes up to him and asks, “Do you know anything about ‘this stuff’?” He replies, “I know… everything about ‘this stuff.’” And that’s the heck of this situation. It’s almost like every single person concerned with this process has absolutely no idea how any of “this stuff” actually works, and won’t listen to someone who does. And I realize how conceited that may sound, but, in this case, I don’t know how else to put it.

The only other explanation is simply apathy in the face of bureaucracy, and I wish senior IT management would take it on themselves to root out this sort of intransigence and fix it. It would seem to be their job, and would go a long way toward justifying a C-level salary. Unfortunately, this isn’t the first time I’ve found myself trying to explain a direct contradiction between IT best practice and IT corporate policy to the very people who are supposed to be in charge of both. I’d like to think I’ve learned how to convey my thoughts in a less confrontational way, but I obviously still haven’t figured out how to motivate people to rise above the internal politics and align the two, and that makes me sad.

I’m finally posting this because I just got another request to approve version 8, now three and a half years on, and I needed to vent.

¯\_(ツ)_/¯

Update 1

A couple of weeks after posting this, I got CC’d on a long desktop support email chain from a developer in India who can’t get angular-cli version 7.x working with npm. <sigh> And there are four references to how urgent and how high a priority this is. A simple search turns up a pretty detailed SO post about the particular error message, and the general answer seems to be to either play games with the particular versions of the dependencies, or just upgrade to 8 or 9… three years ago. In any case, this isn’t a desktop support question. IMNSHO, this is squarely a developer’s issue. Sorry, but that’s the job, brother. Do I try, feebly, to make another point, or just let this go?

Update 2, eight months later

Because everyone got new laptops, I was looking around the internal company web page for software installation. And what do you think I happened to see? That’s right! Got it in one try! To be fair, there’s a newer version, but this version should simply not exist, anywhere, for any reason, at this point.

Still There

The Crushing Weight of Knowing What You’re Doing

“Who are you and why are you here?” –Dave Cutler (DaveC)

Source: 012. I Shipped, Therefore I Am

Steven Sinofsky, for a very long time a huge wheel at Microsoft, is writing a series of articles at Substack chronicling the halcyon days of the early PC business. I can’t quite bring myself to subscribe, because most of it is free already. Plus, there aren’t many surprises for me, since I was living it during that time.

When Windows NT was introduced, I was quick to jump on board. I was already experimenting with Linux towards the end of ’94. But then I saw a disc of NT 3.5 (not even 3.51 yet) on someone’s bookshelf. He said he wasn’t using it, so I snapped it up and installed it. For the next 20 years, I would dual boot my PCs between Windows NT and Linux. I only used Windows for gaming, but that use proved obstinate: I tried every incarnation of Wine and CrossOver and PlayOnLinux and everything else, and nothing ever let me run Windows games on Linux well enough to warrant getting rid of a native partition.

The content of the slide above is of no consequence, as is pretty much the case with all presentation slides. What’s interesting to me is the little toolbar on the top, left side. It’s from the early Office XP days, back when Microsoft was new and cool. “Before the dark times. Before the empire.” Seeing it evoked a visceral response. As a computer nerd, those really were interesting and exciting times to live through. From the article, that screencap is from 1992. Competing against giants like IBM, HP, and Sun, Microsoft’s eventual dominance was anything but sure at that time. And that’s what’s prompted me to write this anecdote.

In 1995, my Fortune 250 company didn’t even have an internet connection yet. I was using a phone line, and a modem that I conned my boss into letting me get. It was over this modem that I downloaded all 54 floppy disk images of Slackware Linux, on a computer running Windows 3.11 with Trumpet Winsock, connecting to a free SLIP dialup bank in California.

At first, I was much more into NT than Linux. I skipped Windows 95 entirely. I don’t think I ever had a computer that ran it.

I remember how easy it was to set up a dialup connection in NT. By 1996, I was running a dual Pentium Pro with 384 MB of RAM, SCSI hard drives, and a $2,500 video card to do FEA work. The total cost was about $10,000. A coworker got an SGI Indy to do the same sort of work, to the tune of $80,000. The company still didn’t have an internet connection, so he also got an external modem, and hired a local ISP to come set it up. The guy came and screwed around with the connection for 4 hours. I kind of razzed him by pointing out that it took me all of 15 minutes to configure the same thing on NT. That’s how smug I was about NT versus Unix at the time.

The best part was still to come.

For the next week, the ISP guy still couldn’t get that Indy on the internet. Every time it would connect, the kernel would segfault, and the machine would crash.

But that’s not the best part.

The ISP guy worked with SGI to patch IRIX to fix the modem driver, and finally got it working. My coworker left it connected to the internet all the time to get his email. Things worked fine for a few weeks.

Then the company got a T1 internet connection, and connected our facility to the main office via a SONET ring. I was really looking forward to not needing my dialup connection any more. But, the first morning, no one could access the internet. Complaints were made. Investigations were performed. Our internal IT would fix the problem, and the next day, it would come back.

Here comes the best part.

Finally, someone realized that computers inside our facility were getting the wrong gateway address for the internet. They were picking up the IP address of my workmate’s Indy, which was advertising itself as a route to the internet. Since the hop count from computers in the office to the Indy was lower than the route through the central office, they preferred its modem, and the Indy’s phone line would choke under the load.

I recall very clearly that there was a simple checkbox in the dialog for setting up a dialup connection in Windows NT for advertising the connection to the LAN as a route to wherever you were connecting. It was on by default, but when I was running through the process, I quickly realized that this was NOT what I wanted, and un-ticked it.

And I felt pretty smug about being serious about NT at the time.

I stuck with NT as my primary interest until some time around 1998 or so. Then Nat Friedman and Miguel de Icaza released Ximian Desktop for Linux, which made Linux on the desktop really pleasant to use. I wasn’t doing analysis work any more. I had transferred to become the system admin of all the Unix machines in the advanced engineering group, so running Linux was a perfect fit. After that, it was pretty much all Linux, all the time, until switching to Macs just a few years ago.