Another Day, Another Boneheaded Move by #CorporateIT

I’ve been having mysterious problems with both of my corporate computers. Things that used to run only sort of run now. Today, I finally figured out that this is happening because #CorporateIT, in its ineffable wisdom, has decided to suddenly start automatically deleting any customizations to either the system or the account PATH variable by way of login (or logoff, or startup, or shutdown) scripts.

Years ago, Arvin was a lovely company with lovely people. Then it was sold out from under us, and eaten alive by Meritor (which has now been eaten by Cummins). They made a big show of bringing in some bonehead whose job was to set up “proper” IT policies. I watched in horror as he slapped together a bunch of white papers rummaged from the internet, copy-and-pasted them into “controlled” Word docs with company logos in the header, and presented the result as a legitimate security posture, despite obvious problems and glaring inconsistencies. Unintimidated, I took him to task about it. We went a couple of rounds, which ended with him literally screaming at me over the phone. I finally got the attention of one of the senior IT directors, and got a chance to vent about the situation.

One of the things I complained about was the removal of cron from all Unix machines, which I (as a Unix admin, at the time) was making liberal use of. First, cron doesn’t allow you to do anything you couldn’t otherwise do, so why remove the convenience? Second, if running things out of hours or on a schedule is a Bad Thing (TM), then why weren’t we also removing Task Scheduler from all Windows machines? Third, if this is about a security vulnerability in the binary, then keep up to date with the vendor’s patches, just like with everything else.

The director then told me that that particular policy provision was actually written by her, as though this was supposed to make me suddenly backtrack and withdraw my objection. I asked her why, and all she could do was say that it was considered an “industry best practice.” Yeah, but why!? The bottom line is that this was an unintended consequence of SOX. It’s just a thing that’s easy for consultants to suggest, easy for IT staff to do, and easy to verify, and it makes a nice bullet point in a validation study about IT policies. Job done! Give IBM $100K to rubber-stamp our SOX compliance report! But it does literally nothing to “secure” anything. All it can do is inconvenience users.

If there’s an actual security flaw in the cron daemon itself, then get it patched! There’s no reason to eliminate it entirely. At least, it’s not worth the inconvenience of uninstalling it on the slight chance that a new vulnerability might be found in it, and exploited by a bad actor, before it can be patched.

This is a hill I will die on.

I got my cron back.

Today’s issue with #CorporateIT is the same. Now I can’t run rails or rake or git at the command line unless I fully “path” them. This is what has been breaking my scripts. And I know they’re nuking both system and user PATH variables, because I tried the second after noticing that the first was being blown away. Why in the world are we deleting customizations to the PATH variable? On what planet does this make anything more secure? What malware wouldn’t try all known paths, regardless of the PATH setting, or fully path its own executables? How can this do anything but make people’s lives less convenient? It’s still possible to set, of course, so I guess I’ll write a .BAT script to run when I want to start working, which will update my user PATH variable so I can just get on with it.
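Something like this, probably (a sketch; the directories are stand-ins for wherever your tools actually live):

```bat
@echo off
rem fixpath.bat: put back the user PATH entries that #CorporateIT keeps nuking.
rem These directories are examples; adjust them to your own installs.
set "TOOLPATHS=C:\Ruby31\bin;C:\Program Files\Git\cmd"

rem Persist for future sessions. This overwrites the user PATH, which is
rem fine here, since IT already emptied it. Note that setx does not touch
rem the current shell...
setx PATH "%TOOLPATHS%"

rem ...so patch the current session, too.
set "PATH=%PATH%;%TOOLPATHS%"
```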

Wow. We’ve really locked down the configuration, huh, guys? The bad guys have no chance now!

To me, the implementation of any security measure depends on the answers to some fundamental questions: What’s the vulnerability? How large is the risk? What’s at stake? What is the mitigation? Is the preventative fix worth the cost in terms of money, access, and productivity? What’s the data we are protecting worth, such that it makes sense to implement the policy? I understand there’s a lot of subjectivity here, but these questions will separate the wheat from the chaff really quickly.

For instance, the staggering mountain of PowerPoint presentations that no one having a meeting can seem to do without, sitting on the corporate file server, means nothing to anyone outside the people having the meetings, and even then, only for the week they’re having them. Does it make sense to install every security product on the market to protect this “information”? Not in a million years. Even Office documents you think are profoundly important are hard to dig up out of your collection after a little while, and hard to make sense of once you do. How would any of this “data” be strung together in any useful way by bad actors? For all of the hand-wringing about it, the shared drives could be open to the public, for all the risk it actually exposes the company to.

I have another story about this, but I’ll save it for another time.

Every time we turn around, IT has implemented a new policy, a new layer, a new product that’s supposed to make our “data” “more” “secure,” and each time it happens, we lose the ability to do something useful. #CorporateIT dictates that our Teams chat histories vanish after just 24 hours. In a company which requires a month for anything to get done, and often requires multiple tries, it would be nice to be able to refer to that log for a month, no? Does no one in the company see this? What sort of crack-addled meeting was held between legal and IT to come up with this? Deleted email disappears after 30 days. If you want to save it to refer to later, you need to remember to hit the “archive” button. Again, when things take months to happen… But sure, blame it on litigation.

The really stupid part of this? These moves won’t save you legally. People involved with whatever is being discovered will be called to testify, under oath, about what they said, regardless of whether records attest to it. So this does nothing to prevent legal culpability. It’s just another hassle for end users in the name of a tick box on an auditor’s checklist.

Every week, there’s a new thing to justify a budget. Every week, it’s a new, unannounced loss of capability. I’m really getting tired of it.

Update

About a week after I wrote this, a coworker sent out an email to our entire group, saying that hundreds of thousands of documents we still rely on had been automatically deleted from our SharePoint files and Teams channels. He said they had been restored, and that he was working with IT to make the auto-delete policy kick in at 10 years, instead of the current 3. This is exactly what I’m talking about when I say that, if a company moves at a pace where even the simplest things take a month or three to do, then we need chat history to last at least that long. Our projects are sometimes decades long. We need our stuff for at least that long.

This is a perfect example of IT setting “security” policy without asking the basic questions above, and living in a fantasy world where they are free to believe that their consultant-and-whitepaper-suggested rules don’t have costs. At least my coworker didn’t throw up his hands, and say (basically), “You can’t fight city hall!” He took them to task, and now they’ve had to realize, in at least this one case — for, again, no actual legal benefit — the utter hassle they incur when their incentives are misaligned with the people who do the work that keeps them employed.

Update 2

Here we go again

Now people are educating each other about how to save important documents from being automatically trashed from OneDrive.

UNIX co-creator Ken Thompson is… a what user? • The Register

Elder statesman of system software makes a shocking revelation:

Thompson replies:

I have for most of my life – because I was sort of born into it – run Apple. Now recently, meaning within the last five years, I’ve become more and more and more depressed… And what Apple is doing to something that should allow you to work is just atrocious… But they are taking a lot of space and time to do it, so it’s okay. And I’ve come, within the last month or two, to say: even though I’ve invested a zillion years in Apple, I’m throwing it away, and I’m going to Linux. To Raspbian, in particular.

Source: UNIX co-creator Ken Thompson is… a what user? • The Register

This article is a fantastic summary of the public highlights of this living legend of computer science. I, too, fear that Apple is transforming their general purpose macOS computers into walled-garden computing appliances like iPhones and iPads. I have lamented the switch to locked-down bootloaders, but… dang if it doesn’t basically prevent theft of Apple devices (almost) outright, whatever the security and privacy considerations.

I, too, will switch to Linux, if that day ever arrives. I suspect a lot of people will do the same, particularly the cohort of developers that does not use macOS to write iOS software. When last I left Linux, I would still have given it the edge in web application development, and non-iOS/non-Windows development in general. The problem now, of course, is that my entire life is contained within my Apple ID. That’s how they get you, and they know it.

This all makes me want to try some current version of Linux now, and see how much of my workflow I could do on it, and what I would lose. Unfortunately, the bottom line is how well a MacBook works with its own hardware, especially things like power saving and dealing with the lid and external monitors, and how it works with all of the other devices: phone, tablet, watch, video device, “pod,” tags, and especially iMessage. This alone “covers a multitude of sins,” but Apple should know that the integration benefits have limits, and chief among them is the ability to do our information technology jobs the way we want to, with the applications and environments we find best. Take those choices away from us, and it will be a line that we cannot cross.

Can GPT-4 *Actually* Write Code? – by Tyler Glaiel

I test GPT 4’s code-writing capabilities with some actual real world problems.

Source: Can GPT-4 *Actually* Write Code? – by Tyler Glaiel

Can these new large language models really replace software engineering? GPT is showing that it can write trivial code, with well-defined inputs and outputs, but I work on very complicated applications, and the trouble is specifying the problem we’re trying to solve. I’ve thought a bit about trying this exercise with my own software, that is, telling GPT the general issue we’re trying to address, and seeing what it comes up with. The difficulty is that it took months for me to understand the depth of what is going on, so it would be very hard for me to boil it down to a prompt.

I was at Purdue University, studying mechanical engineering, in the late ’80s. An electrical engineering friend had gotten an internship at a Fortune-100 company. I marveled, but he explained that, as a “new guy” at a monstrous company, you would spend your time… oh, I don’t know… designing a very specific screw until you worked your way up the ladder for a decade or two.

From the start of my career, I fell into writing software for fellow engineers in manufacturing companies, and I’ve been a full-stack guy, inventing new things, for about 27 years now. It’s been very intellectually rewarding. Unfortunately, I make a lot less than I could make in a coastal city, working for a non-manufacturing, “internet”-type company. My total career compensation is likely staggeringly smaller than it could have been.

But when I think about chucking this approach, and trying to leverage my experience to get a job at a “software” company, I go back to my buddy’s comment from 30 years ago. What is intellectual satisfaction worth to you? To me, it works out to being worth literally millions of dollars in career earnings, I guess.

As a picture-perfect example of being able to chase big, novel ideas in software, I find myself in a unique position to try to make my own model to do, essentially, what a lot of the engineers at my company do. I have the data. I have the freedom to spin up whatever infra I need in the cloud. I have the ability and the time to learn machine learning, which I’ve already started. If it works, some managers will love me, and a lot of engineers will hate me. It’s basically the story of my career, just writ larger this time. Yay.

Programming vs. Achievement Hunting

Last night, in my continuing saga of playing Fallout 76, I finished all the main quest lines, and turned my attention to one of the first side quests you’ll run into when starting the game. You meet a robot, a Fallout version of a Boy Scout leader, who starts you on a mission to become a “tadpole” scout. It turns out that this “mission” really comprises about 9 parts, each of which has about 7-10 parts of its own. The game adds a tracker for all of these steps in an obscure place without telling you, so you probably won’t even notice it, leaving you to wonder how to accomplish any of these things.

Many of the steps require things I still don’t have at level 190. At least one requires an item that is a rare drop in an infrequent event which I can’t solo. So there’s that. For reference, you unlock the 5th of only 6 legendary perk slots at level 200. So, even by the game’s standards, I’m fairly well along the path, yet I have a long way to go to finish something that started when I was in single-digit levels.

The point of this exercise is to acquire a better backpack. As in other MMOs, you’ll be spending about half your time on inventory management, in some form or another. After leveling up, getting a few key perks, and grinding for some critical upgrades to your gear, an extra 45 pounds of carrying capacity goes much, much further than it normally would, so this is a really nice thing to obtain. The good news is that you only have to complete 3 of the initial badges to get it, and they can be any you feel like fooling with, but there are only about 5 that you can do without being, well, apparently a much higher level than me.

The “bad” news — or, the expected news, given that we’re talking about an MMO — is that completing three “tadpole” badges unlocks a whole new series of achievements for obtaining “possum” badges. About 19 of them, all with 8-12 steps each, many of which require… you guessed it… things I still don’t have, with no idea how long they will take to obtain.

One thing that has become clear is that it’s time to launch a nuke. There are about 3 achievements that relate to it. I saw someone else comment on a forum that they didn’t do it till level 200. I get it now. I tried it once, and realized what a slog it is, and quickly set it aside. The mission continuously generates enemies until you traipse back and forth around the level and find the thing and unlock the other thing and finally enter a code. Normally, you would have to get the code by killing special enemies in the overworld and collecting the parts, but, thankfully, these codes are game-wide for a particular time period, and people figure them out and put them on a web site. Soloing this mission will require extreme sneaking to just avoid as many bad guys as possible, and I’ve got the perks and the Stealth Boys to try it now. I just wish I could stumble on a team that had some level-1,000 guy who was doing it to start the Scorchbeast Queen encounter, and just get the achievement by osmosis. But so far, no good.

So the net-net of all of this is that I’m trying to tick about 300 different to-dos off my list, in as efficient a manner as possible, to speed things up. You know… Do this while on the way to do that while using this and eating that and picking up these things to craft these other things… You get the idea.

A surprising amount of this activity involves taking pictures of various creatures with the in-game camera. (As opposed to using the game’s photo mode, which other achievements require.) What I’ve noticed is that photographing some animals now counts toward multiple achievements, between the various “badges” and the game’s “overworld” baseline achievements, which means I’m probably going to just walk around parts of the map where I can run into a bunch of particular kinds of creatures to photograph in one area. Oh, and be on the lookout for rare plants and mining deposits exclusive to that region.

Anyway, the point of writing this down is to note how similar this exercise feels when I’ve finished a major sub-project in my professional life, and start looking over my backlog in Pivotal Tracker, and trying to prioritize my next tasks. I realize that I’m looking over the list for ways to combine activities and push the lowest-hanging fruit to the top of the queue. And, suddenly, it dawns on me why, despite so many frustrations, I’m still drawn to MMO’s, and, at the same time, why they often feel like work to me.

Caring about Costs is Cool

But costs aren’t just about the bottomline, they’re also a measure of efficiency. I have a distinct distaste for waste. Money spent on the frivolous or the ill-considered is money that can’t be spent elsewhere. Like an engine drinking too much oil just to run. Tight tolerances (but not too tight!) are a joy in themselves.

Source: Caring about costs is cool

I’ve been a fanboy of DHH for many, many years. Yes, he created Ruby on Rails, which I’m still enamored with 14 years later, so we have that in common, and as someone who’s made a living using it for the past 10 years, that’s a big deal. However, he’s one of only a couple of people “on the internet” with whom I agree on almost everything, and I’ve never really understood why until this post.

Of course I’ve known that he races sports cars at Le Mans, but you can probably drive those cars without understanding the engineering concept of correct tolerances in an engine. That he intuits this premise deeply enough to draw this analogy is the key I was missing to understand my fascination with him.

I may be a programmer (and system administrator, and network engineer, and database architect), but I’m a mechanical engineer at heart. It’s how my mind works. I see how things are related and interconnected. I tell everyone I work with the same thing: I’m awesome at seeing the trees, but pretty bad at seeing the forest. I’ll give you options; you make the decisions.

Being a physical engineer, whether mechanical or civil or electrical or aeronautical or nuclear, isn’t just a vocation; it’s a way of thinking about the world and how it works. In that respect, our thinking lines up really well, and I suspect that’s why we land in basically the same place on most everything else.

It’s a theory, anyway.

Mastodon


Your self-hosted, globally interconnected microblogging community

Source: mastodon/mastodon on GitHub

So I’m just now realizing that Mastodon is a Rails 6.1 application. I just looked over the Gemfile, and it includes a lot of the usual gems, notably cocoon, right at the end. I have a love/hate relationship with this particular gem.

I love how it solves the problem it addresses. It’s an ingenious solution, and a clever implementation. Its author is also very supportive, and has done a lot of work to document it well and answer questions, on GitHub and StackOverflow. I dislike the fact that the form markup sort of has to be so fiddly for non-trivial cases, but I accept that tradeoff for preventing round trips to the server for interactions with subforms.
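If you’ve never used it, this is roughly the shape of the wiring (a sketch, with made-up Project/Task models rather than anything from Mastodon’s codebase):

```ruby
# app/models/project.rb (hypothetical models, purely for illustration)
class Project < ApplicationRecord
  has_many :tasks, inverse_of: :project
  accepts_nested_attributes_for :tasks, reject_if: :all_blank, allow_destroy: true
end
```

```erb
<%# app/views/projects/_form.html.erb %>
<%= form_for @project do |f| %>
  <%= f.fields_for :tasks do |task_form| %>
    <%= render 'task_fields', f: task_form %>
  <% end %>
  <%= link_to_add_association 'Add task', f, :tasks %>
  <%= f.submit %>
<% end %>

<%# app/views/projects/_task_fields.html.erb %>
<div class="nested-fields"> <%# cocoon needs this wrapper class %>
  <%= f.text_field :description %>
  <%= link_to_remove_association 'Remove', f %>
</div>
```

The add/remove links clone or hide copies of that partial entirely client-side (via jQuery, which becomes relevant below), so nothing hits the server until you actually save.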

What I hate is that Rails has never introduced a feature to do what this gem does. I get it, but I hate it. Really, the only way to prevent a round trip is to generate the nested form markup client-side in Javascript, and Rails isn’t about Javascript. In that respect, I actually appreciate that the team has NOT tried to include this approach out of the box. I just wish there were a way to have my cake and eat it too.

Hopefully, my next app will be Rails 7, and free of jQuery, not just by default, but also on principle. Unfortunately, this means I won’t be able to use cocoon, but maybe someone will remove the jQuery requirement by then. Maybe I should do it.

Also, maybe this is finally the impetus that will get me to try Mastodon.

Don’t Get Involved with Things you Can’t Fix, and You Can’t Fix Stupid

Twenty-odd years ago, I was involved in a Product Data Management system implementation. This is just part of a much larger story, but the salient point from the epic saga is that I worked for a psychopath, and he tried hard to make my life difficult. I never figured out why. I think it was because he blamed me for something my previous boss did to his project. Anyway, we’ll get back to him later.

I was operating as a sysadmin, tasked with helping the main admin from France install an application on our servers, here in the US. At the time, corporate IT had just made it policy that no one but them could have root on machines hosted in their data center. On Unix (as opposed to Windows), I didn’t mind. That works just fine. However, the other admin had made getting root his #1 requirement. I told him of the policy. He didn’t relent. So I tried to escalate the coming train wreck with my management and everyone in corporate IT, hoping that something could be worked out before he arrived.

The guy shows up, shakes my hand, and asks me for the root password. I get on the phone with the main Unix admin. They finally relent, and allow me (because I’ve known them for 6 years by that point) to sudo to root to set up all the prerequisites.

The other admin is furious, tells us he can’t do anything until he gets root, and goes back to his hotel. Next day. Big meeting. Everyone on the phone. Group in one office, corporate IT in theirs, admin from the hotel, boss in the UK. I ask: “Michael, what specific commands do you need to run as root?” He says — get this — “You get in your car, and you turn the key, and it starts up. You don’t know how; it just works.”

In our room, we all just looked at each other in disbelief. First of all, he was talking to a bunch of mechanical engineers who happened to fall into implementing a PDM project. We all understood exactly how cars work. Second of all, everyone on the call would expect “the expert” at installing the application stack to be able to answer the question.

It was clear there was no arguing about it further, and the project had to get done so that he could shuffle off back to France, so they gave him root, and he did his thing from the hotel, and never spoke to me again.

After all the nonsense, you know what the problem was? The application server was configured to run on port 80, out of the box. That’s it! It assumed it would be running on the standard, privileged port. We could just as easily have configured it to run on port 8000, or port XYZPDQ. It didn’t matter! We had a load balancer running on port 80 in front of it. It could have been any port we wanted! Our “expert” admin couldn’t understand that, and my fearless management wouldn’t hold him accountable for lacking such an elementary understanding of what he was doing.
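(The modern equivalent of the fix, in my world, is literally one line of an app’s Puma config; a sketch, with the 8000 chosen arbitrarily:)

```ruby
# config/puma.rb: bind the app server to an unprivileged port.
# The load balancer in front owns 80/443, and nobody needs root.
port ENV.fetch("PORT") { 8000 }
```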

In the weeks after, I realized that my boss had made me the scapegoat with upper management for the situation, because I was the one that tried to head this disaster off at the pass. Since I had sent emails, and talked about it, apparently I was the one who was causing the problem. This was just one of the many conflicts with my psychopathic boss. I had to learn a lot of hard lessons about politics over the 3 years on that project, but this one backfired in the most unexpected way.

Unfortunately, I had basically the same sort of thing happen again a few years ago. I tried to warn my management that IT was telling me something really, really stupid, and that it was going to come to a head in a spectacular way. But they couldn’t understand anything I was telling them, and trusted that IT knew better than I did. The problem was that IT didn’t want me working on the project. They felt they should have been the ones to “get the business” to develop it, and were actively trying to slow me down. I never learned what else to do in this situation except continue trying to educate the people who are looking at me like I’m crazy. Anyway, maybe I’ll blog that one 20 years from now.

37signals Dev — Vanilla Rails is plenty

In our example, there are no fat models in charge of doing too many things. Recording::Incineration or Recording::Copier are cohesive classes that do one thing. Recording::Copyable adds a high-level #copy_to method to Recording’s public API and keeps the related code and data definitions separated from other Recording responsibilities. Also, notice how this is just good old object orientation with Ruby: inheritance, object composition, and a simple design pattern.

Source: 37signals Dev — Vanilla Rails is plenty

This is an “implementation” of my guiding philosophy of programming:

If you truly understand the process you’re trying to implement, the code will “fall out.”

This article discusses adding a Rails concern for making ActiveRecord objects copyable and “incineratable,” and then implementing those operations in PORO (plain old Ruby object) models. That’s great, but this sort of indirection is only needed to commonize the human-to-machine naming that might be used for different classes in the application. (There’s probably a term for this, but conceptualizing the terminology used in classes and methods is an art unto itself.)
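To make the shape concrete, here’s roughly what that pattern looks like. This is my own paraphrase of the article’s description, not 37signals’ actual code, and the bucket business is an assumption for illustration:

```ruby
# app/models/recording/copyable.rb
module Recording::Copyable
  extend ActiveSupport::Concern

  # One high-level method on Recording's public API; the actual
  # work is delegated to a small, single-purpose PORO.
  def copy_to(destination)
    Recording::Copier.new(self, destination: destination).copy
  end
end

# app/models/recording/copier.rb
class Recording::Copier
  def initialize(recording, destination:)
    @recording = recording
    @destination = destination
  end

  def copy
    # Assumption: copying a Recording means duplicating the row
    # into a different bucket.
    @recording.dup.tap { |copy| copy.update!(bucket: @destination) }
  end
end

# app/models/recording.rb
class Recording < ApplicationRecord
  include Copyable # resolves to Recording::Copyable
end
```

The concern keeps Recording’s public API in one discoverable place, and the PORO stays testable on its own.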

I don’t think I’ve ever written a concern, but, then, I’ve never written a Rails application (out of at least a dozen and a half now), with 500 classes, which would inevitably have some overlap in their “business” functionality. My current app is the most complex thus far, and it only has 52.

If you don’t have that situation, you don’t need this level of abstraction. And (here’s the important part) if you do have that situation, you will find yourself starting to write duplicated code. When this happens, as a programmer, your “spidey sense” should start tingling, telling you there’s another level of abstraction to implement.

And that’s what I mean about the code “falling out” of implementing the actual process of what you’re trying to program.

I suppose there’s a case to be made here that you might wind up with duplicated code on a large codebase, simply because one programmer didn’t know what another programmer had done, but these kinds of things will happen. Refactoring the duplication, once discovered, is just part of the job.

The Case for C# and .NET. It has been interesting as I’ve shifted… | by Charles Chen | ITNEXT

It has been interesting as I’ve shifted out of .NET ecosystem which I’ve worked with on the server side (and some stints of desktop…

Source: The Case for C# and .NET. It has been interesting as I’ve shifted… | by Charles Chen | ITNEXT

There are a couple of takeaways from this article. He talks about it from the perspective of a fan of .NET. I see strong points in favor of Rails as well.

First and foremost, I want to talk about speed. As a fan of Rails, I hate it when critics bring up the speed of Ruby, because I have to acknowledge that there is a definite, unavoidable penalty there. And why shouldn’t there be? It’s the interpreted nature of Ruby that makes ActiveRecord in Rails so dang flexible and easy to use. But I came to using Rails after about 10 years of using PHP, and it was painful to compare page load speeds in apps I rewrote from PHP to Rails. However, the relative productivity of the Rails stack made it a no-brainer over PHP for me.

In this article, the author compares a particular benchmark amongst various languages typically used for web application development. Here, he’s pointing out how slow Javascript is compared to .NET. But what I want to point out is how Ruby stacks up against Java, the language it’s so often unfavorably measured against: basically no difference.

Further speed point here: https://benhoyt.com/writings/count-words/

Another thing to point out is the package mess. From the first graph in the article, you can see the explosion of dependencies in the Javascript stack. Comparatively, it dwarfs everything else. Combine that with the second graph, and the situation gets even worse. Sure, by this measure, the .NET stack wins the race, but it’s also interesting to me that Rails clearly comes in second, especially when you also consider that it has zero critical vulnerabilities.

Over and over, Ruby and Rails get dissed, these days, as somehow being unuseful, for a variety of reasons. I find those reasons specious. Over and over, when you dig into the rationale behind them, you find out the situation is better than people give it credit for being. Rails continues to be a strong contender in the web application development world. Lots of big players continue to use it, despite how critical the HN crowd is of it. Even if it weren’t suited for those big, commercial web platforms, it would still continue to dominate in writing small, focused, line-of-business CRUD apps, and I continue to find it amazingly powerful to work with.

If I were to criticize the Rails stack, my first point of contention would be the Turbolinks thing. I’ve been sort of forced into using Ag-Grid as a drop-in Javascript data table widget, and, despite a lot of effort, I can’t find a way to make it play nice with Turbolinks.

The Problematic Black Box Nature of Neural Networks and Deep Learning – Brightwork Research & Analysis

Neural networks and deep learning are normally black box systems. This black box nature of neural networks leads to problems that tend to be underemphasized in the rush to promote these systems.

Source: The Problematic Black Box Nature of Neural Networks and Deep Learning – Brightwork Research & Analysis

I find this article absurd. If I were to create a neural network, the very second thing I would program into it would be the capability for it to log WHY it did the thing I programmed it to do. Are you really telling me that the tools available to us right now are incapable of this?