AI and the Big Five – Stratechery by Ben Thompson

Mobile ended up being dominated by two incumbents: Apple and Google. That doesn’t mean it wasn’t disruptive, though: Apple’s new UI paradigm entailed not viewing the phone as a small PC, a la Microsoft; Google’s new business model paradigm entailed not viewing phones as a direct profit center for operating system sales, but rather as a moat for their advertising business.

Source: AI and the Big Five – Stratechery by Ben Thompson

I think it’s worth noting something here. Just before this paragraph is this:

The PC was disruptive to nearly all of the existing incumbents; these relatively inexpensive and low-powered devices didn’t have nearly the capability or the profit margin of mini-computers, much less mainframes. That’s why IBM was happy to outsource both the original PC’s chip and OS to Intel and Microsoft, respectively, so that they could get a product out the door and satisfy their corporate customers; PCs got faster, though, and it was Intel and Microsoft that dominated as the market dwarfed everything that came before.

It seems to me that Microsoft was guilty of the same sin as IBM when it came to mobile. IBM viewed PCs as tiny little mainframes. Microsoft viewed “smart” phones as tiny little PCs.

Whenever people write like this, it nags at me that a massive, multinational corporation’s motivations could be represented by a single viewpoint, held by a single person. But then I force myself to relax, realize that the organization’s actions really can be boiled down and explained like this, and commit to the simplification for narrative purposes. So, acknowledging this… What “IBM” couldn’t “see” was that, while “limited” in relation to a mainframe, the PC was capable enough to do things that mainframes couldn’t do. I’ll never forget the Aha! moment I had in my first engineering job. “I was there, Gandalf; 3,000 years ago.”

I was working for a small (80-ish people) company that made air compressors. They had just been bought by a huge, multinational air tool conglomerate, and the former owner had spun off a tiny portion of the tiny business into a new, separate company. As part of the new owner’s investment, the company was buying new PCs for “the office.” Five of us got new, genuine IBM i486DX2/66 PCs with all the goodies, including real IBM Model M buckling-spring mechanical keyboards. They were glorious.

In an old garage, next to the main building, was a pile of “stuff” left over from the rearrangement. In that pile, I found an internal 4800 bps modem, and a full-length “mainframe” card for attaching to a token ring network and emulating a terminal. I installed both into my PC, and got my boss to let me get a Prodigy account. (And discovered Doom.) The “mainframe” card allowed me to connect to the mainframe, but I didn’t (and still don’t) know anything about mainframes, so I just left it there.

Then my boss asked me to do a BOM comparison between two similar compressor models, and pointed me at two giant mainframe printouts on green-bar, sprocket-fed paper, in those terrible binders with the variable-length metal straps to hold them together. They were about 2 inches thick. I started to compare the paper reports for about a minute before I had a thought…

I got the lady who ran the mainframe (an IBM System/36) to make me BOM reports for both compressor models. This apparently required writing an entire program, and it was no wonder that mainframes were already dying by 1993, but I digress. I was able to download the reports to my PC over the “mainframe” card. Of course, these reports, being simple lines of text, were only a megabyte or so, but I had eight megabytes of RAM in my fancy new PC! So I was able to import both BOMs into Quattro Pro, and do some spreadsheet manipulation to show the differences.
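Thirty years later, the same comparison is a few lines of code. Here’s a minimal Ruby sketch (the part numbers and quantities are made up for illustration) of the kind of diff the spreadsheet was doing for me:

```ruby
# Each BOM is a hash of part number => quantity (hypothetical data).
bom_a = { "VLV-100" => 2, "GSKT-17" => 8, "BOLT-M8" => 24 }
bom_b = { "VLV-100" => 2, "GSKT-17" => 6, "SEAL-03" => 4 }

# Report parts unique to each BOM, plus parts whose quantity changed.
def bom_diff(a, b)
  {
    only_in_a:   a.keys - b.keys,
    only_in_b:   b.keys - a.keys,
    qty_changed: (a.keys & b.keys).reject { |part| a[part] == b[part] }
  }
end

diff = bom_diff(bom_a, bom_b)
# diff => { only_in_a: ["BOLT-M8"], only_in_b: ["SEAL-03"], qty_changed: ["GSKT-17"] }
```

In 1993, two VLOOKUP columns in Quattro Pro did roughly the same job.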

This sort of simple, quick, ad-hoc query and reporting capability, enabled by spreadsheets, has been the backbone upon which all corporate business has been run for almost 30 years. A lot of company data now lives in cloud services, which have their own query and reporting tools, but my perception is that Excel is still a core tool that the majority of people in the Fortune 1000 are using to manage their workflows. Like, you could take away literally everything else but Excel and email, and you’d be fine. It would take some adjustment, of course, but the business would carry on. That’s how critical it is.

IT managers in large corporations like to think that their multi-million-dollar IT systems are special, and there’s an attitude that the company couldn’t exist without them now that they’ve been implemented. Entire kingdoms are built around them in the modern, feudal-like corporate system present in every Fortune 1000. However, the people running these systems don’t seem to understand that there is invariably enormous activity in the company devoted to shoring up these systems with ad-hoc tools in Excel, simply because the team responsible for the system will never have the time to implement the customizations the users need to make the system truly useful for their work. Or, if they do know it, they ignore it, and they can, because they are not held accountable for the vast quantities of technical debt and wasted work caused by their compromised implementation, which stopped short of all the promises upon which the system was sold to the monarchy. The true costs were never actually presented, and now that “shortage” not only gets spent, but gets duplicated all over the company, because spreadsheets do not “scale.”

I didn’t start out to make that point, but this is why I write: to “work out my thinking,” as I state in the sub-title of this blog.

How do I know? Because if I could sum up my 27-year career, its central theme would be creating applications to replace terrible, shared Excel spreadsheets with — hopefully, less terrible — web and native applications, tailor-made for the workflows the spreadsheets were supporting. I can count 13 right off the top of my head, and I’m sure I’m forgetting some of the smaller ones. I’ve spent about 21 of those years in Fortune 250s, so maybe I have a jaded view, but my feeling is that this extrapolates across all big companies, worldwide.

This is what IBM missed. People know what they need, and will use “manual” effort to get around corporate IT lethargy. At first, it was routing around mainframes and their impossibly slow development times. Now it’s every large “corporate” system, like CRM or ERP or PDM, and their impossibly slow development times. The limitation of “the mainframe” wasn’t in its hardware or its development language; it was in the fiefdom-based corporate budget allocation system, and the unintended consequences it produces: specifically, the unaccountability inherent in the fact that the monarchs can’t understand the technical and logistical limitations of customizing a large system, so the true costs are elided in the endless budget cycle. And when an aging system is deemed fit to retire and replace, the whole cycle starts all over, with corporate IT creating a system just shy of what’s really needed, and end users creating spreadsheets to backfill the gap.

A lot of these kinds of systems — particularly HR — have been moving to the cloud. Why? In my estimation, it’s not because they’re cheaper, even on paper. It’s because those systems are fully formed, and include all the end-user-facing querying and reporting needed to make the system useful for every requirement. Fortune 500 companies could have made a streamlined version of, say, Workday for their specific, internal use, but corporate IT — as a standalone ivory tower, ultimately beholden only to the CEO, who couldn’t care less — could never figure out how to work closely enough with the user community to address all of their needs. So now, users have to put up with yet another end-all-be-all system, designed to address the needs of every company on earth. But! At least, once they figure out the workflow to get what they need, it’s all downhill from there. Here’s the key: at least it’s possible without Excel.

More and more workflow operations will continue to move into cloud-based services, but it’s only possible to do this with services every company needs. This is why we’re seeing a deluge of advertising for HR apps, even on TV, each designed to hit a different company size and price point. It’s not possible to do this with, say, PDM applications, so companies like mine are going to continue to be hamstrung with systems like Integrity/Windchill. On the one hand, it’s become an important tool which must be used to get products out the door. On the other hand, it doesn’t do a whole bunch of stuff people really need it to do with the data it already has — and it never will — so there are a whole bunch of Excel spreadsheets running loose in the company that duplicate the data, waste manual effort, and do the things that need to be done, which IT has no knowledge of, and does not care about, because it doesn’t show up as a liability against their budget. And the situation will continue for the foreseeable future.

Mastodon


Your self-hosted, globally interconnected microblogging community

So I’m just now realizing that Mastodon is a Rails 6.1 application. I just looked over the Gemfile, and it includes a lot of the usual gems, notably cocoon, right at the end. I have a love/hate relationship with this particular gem.

I love how it solves the problem it addresses. It’s an ingenious solution, and a clever implementation. Its author is also very supportive, and has done a lot of work to document it well and answer questions on GitHub and StackOverflow. I dislike that the form markup sort of has to be so fiddly for non-trivial cases, but I accept that tradeoff for preventing round trips to the server for interactions with subforms.
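For anyone who hasn’t used cocoon, the shape of the markup is roughly this (Project and Task are hypothetical models here; the parent model would also need `accepts_nested_attributes_for :tasks, allow_destroy: true`):

```erb
<%# The parent form: cocoon's link_to_add_association clones a hidden
    template of the partial below, client-side, with no server round trip. %>
<%= form_with model: @project do |f| %>
  <%= f.text_field :name %>
  <%= f.fields_for :tasks do |task_form| %>
    <%= render "task_fields", f: task_form %>
  <% end %>
  <%= link_to_add_association "Add task", f, :tasks %>
  <%= f.submit %>
<% end %>
```

```erb
<%# _task_fields.html.erb -- cocoon requires the .nested-fields wrapper. %>
<div class="nested-fields">
  <%= f.text_field :description %>
  <%= link_to_remove_association "Remove", f %>
</div>
```

The fiddly part is that the partial naming, the wrapper class, and the nested-attributes plumbing all have to line up exactly, which is what makes non-trivial cases feel brittle.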

What I hate is that Rails has never introduced a feature to do what this gem does. I get it, but I hate it. Really, I guess the only way to prevent a round trip is to implement this sort of self-generating HTML form markup in JavaScript, and Rails isn’t about JavaScript. In that respect, I actually appreciate that the team has NOT tried to include this approach out of the box. I just wish there were a way to have my cake and eat it too.

Hopefully, my next app will be Rails 7, and free of jQuery, not just by default, but also on principle. Unfortunately, this means I won’t be able to use cocoon, but maybe someone will have removed the jQuery requirement by then. Maybe I should do it.

Also, maybe this is finally the impetus that will get me to try Mastodon.

Why Not Mars (Idle Words)

Somehow we’ve embarked on the biggest project in history even though it has no articulable purpose, offers no benefits, and will cost taxpayers more than a good-sized war. Even the builders of the Great Pyramid at Giza could at least explain what it was for. And yet this project has sailed through an otherwise gridlocked system with the effortlessness of a Pentagon budget. Presidents of both parties now make landing on Mars an official goal of US space policy. Even billionaires who made their fortune automating labor on Earth agree that Mars must be artisanally explored by hand.

The whole thing is getting weird.

Source: Why Not Mars (Idle Words)

It is my contention that the first space program was cover for developing rockets and guidance systems to neatly deposit nuclear warheads on Russian leaders with pinpoint accuracy. Might a manned mission to Mars offer a similar Trojan horse for developing actual “Star Wars” weaponry, in space and on the moon? It would explain the ease of getting that funding through Congress. The deep state always gets what the deep state wants.

Regardless, this is an incredibly well-written article, and worth linking for its own sake. I mean:

Like George Lucas preparing to release another awful prequel, NASA is hoping that cool spaceships and nostalgia will be enough to keep everyone from noticing that their story makes no sense.

Don’t Get Involved with Things you Can’t Fix, and You Can’t Fix Stupid

Twenty-odd years ago, I was involved in a Product Data Management system implementation. This is just part of a much larger story, but the salient point from the epic saga is that I worked for a psychopath, and he tried hard to make my life difficult. I never figured out why. I think it was because he blamed me for something my previous boss did to his project. Anyway, we’ll get back to him later.

I was operating as a sysadmin, tasked with helping the main admin from France install an application on our servers here in the US. At the time, corporate IT had just made it policy that no one but them could have root on machines hosted in their data center. On Unix (as opposed to Windows), I didn’t mind; that works just fine. However, the other admin had made getting root his #1 requirement. I told him about the policy. He didn’t relent. So I tried to escalate the coming train wreck to my management and everyone in corporate IT, hoping that something could be worked out before he arrived.

The guy shows up, shakes my hand, and asks me for the root password. I get on the phone with the main Unix admin. They finally relent, and allow me (because I’ve known them for 6 years by that point) to sudo to root to set up all the prerequisites.

The other admin is furious, tells us he can’t do anything until he gets root, and goes back to his hotel. Next day. Big meeting. Everyone on the phone. Group in one office, corporate IT in theirs, admin from the hotel, boss in the UK. I ask: “Michael, what specific commands do you need to run as root?” He says — get this — “You get in your car, and you turn the key, and it starts up. You don’t know how; it just works.”

In our room, we all just looked at each other in disbelief. First of all, he was talking to a bunch of mechanical engineers who happened to fall into implementing a PDM project. We all understood exactly how cars work. Second of all, everyone on the call would expect “the expert” at installing the application stack to be able to answer the question.

It was clear there was no arguing about it further, and the project had to get done so that he could shuffle off back to France, so they gave him root, and he did his thing from the hotel, and never spoke to me again.

After all the nonsense, you know what the problem was? The application server was configured to run on port 80, out of the box. That’s it! It assumed it would be running on the standard, privileged port. We could just as easily have configured it to run on port 8000, or port XYZPDQ. It didn’t matter! We had a load balancer running on port 80 in front of it. It could have been any port we wanted! Our “expert” admin couldn’t understand that, and my fearless management wouldn’t hold him accountable for so elementary a misunderstanding of what he was doing.
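For anyone who hasn’t hit this: the only thing root buys you here is the ability to bind ports below 1024. A minimal Ruby sketch (plain TCPServer, no application server assumed) shows the entire distinction:

```ruby
require "socket"

# Binding an unprivileged port (>= 1024) requires no root at all.
server = TCPServer.new("127.0.0.1", 8000)
puts "listening on port #{server.addr[1]}"
server.close

# TCPServer.new("127.0.0.1", 80), run as a non-root user, would
# typically raise Errno::EACCES on Unix -- that was the entirety of
# the application's "root" requirement, and a load balancer already
# answering on port 80 in front of it made even that moot.
```

Listen on 8000, forward 80 to 8000 at the load balancer, done. That was the whole "big meeting."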

In the weeks after, I realized that my boss had made me the scapegoat with upper management for the situation, because I was the one that tried to head this disaster off at the pass. Since I had sent emails, and talked about it, apparently I was the one who was causing the problem. This was just one of the many conflicts with my psychopathic boss. I had to learn a lot of hard lessons about politics over the 3 years on that project, but this one backfired in the most unexpected way.

Unfortunately, I had basically the same sort of thing happen again a few years ago. I tried to warn my management that IT was telling me something really, really stupid, and that it was going to come to a head in a spectacular way. But they couldn’t understand anything I was telling them, and trusted that IT knew better than I did. The problem is that IT didn’t want me to be working on the project. They felt they should have been the ones to “get the business” to develop it, and were actively trying to slow me down. Unfortunately, I didn’t learn what else to do in this situation except continue to try to educate the people who are looking at me like I’m crazy. Anyway, maybe I’ll blog that one 20 years from now.

It’s time for some hard truth – YouTube

OMG. How many times can the universe scream at me that I’m in the wrong business?

“Ultra-fast capacitors.” Sigh. I’m no electrical engineer. In fact, I’ll admit that circuits were the worst part of my mechanical engineering studies. However, I do know that capacitors — selected for their specs — can only go at the “speed” constrained by the voltage and amperage of the entire circuit. You don’t get to select the “speed” at which they store and release charge. You can’t just swap out capacitors with different specs, and expect that the circuit will still perform the function it was designed to do. None of the usual, audiophile-type nonsense, like “oxygen-free, demagnetized, free-range, gluten-free, organic dielectric compounds” could even possibly be rationalized here.

The best part of these things, as always, is the slavish commentary claiming to be able to “hear” vast improvements. Now, normally, I’d “tap the sign” about all ratings and review systems being gamed — and I would (no doubt) be right in predicting it here — but I’ve seen enough “audiophile” commentary that I’m absolutely sure that more of it than I would like to admit is, in fact, genuine. Linus addresses this as well, though he is much more generous than I would be.

37signals Dev — Vanilla Rails is plenty

In our example, there are no fat models in charge of doing too many things. Recording::Incineration or Recording::Copier are cohesive classes that do one thing. Recording::Copyable adds a high-level #copy_to method to Recording’s public API and keeps the related code and data definitions separated from other Recording responsibilities. Also, notice how this is just good old object orientation with Ruby: inheritance, object composition, and a simple design pattern.

Source: 37signals Dev — Vanilla Rails is plenty

This is an “implementation” of my guiding philosophy of programming:

If you truly understand the process you’re trying to implement, the code will “fall out.”

This article discusses adding a Rails concern for making ActiveRecord objects copyable and “incineratable,” and then implementing these operations in PORO models. That’s great, but this sort of indirection is only needed to commonize the human-to-machine naming that might be used for different classes in the application. (There’s probably a term for this, but conceptualizing the terminology used in classes and methods is an art unto itself.)
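For readers who haven’t seen the pattern, here’s a stripped-down sketch of the shape the article describes, with plain Ruby standing in for ActiveSupport::Concern and ActiveRecord (the copying logic itself is elided):

```ruby
# The model gains a short, high-level public method via a mixin,
# and the real work lives in a cohesive PORO, as in the article's
# Recording::Copyable / Recording::Copier pairing.
class Recording
  module Copyable
    def copy_to(bucket)
      Copier.new(self, bucket).copy
    end
  end
  include Copyable

  class Copier
    def initialize(recording, bucket)
      @recording = recording
      @bucket = bucket
    end

    def copy
      # ...the actual copying logic would live here...
      "copied #{@recording.class} to #{@bucket}"
    end
  end
end

Recording.new.copy_to("projects")
```

The model’s public API stays small, and the responsibility gets a class of its own, which is all “vanilla Rails” is claiming you need.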

I don’t think I’ve ever written a concern, but, then, I’ve never written a Rails application (out of at least a dozen and a half now), with 500 classes, which would inevitably have some overlap in their “business” functionality. My current app is the most complex thus far, and it only has 52.

If you don’t have that situation, you don’t need this level of abstraction. And — and here’s the important part — if you do have that situation, you will find yourself starting to write duplicated code. When this happens, as a programmer, your “spidey sense” should start tingling, telling you there’s another level of abstraction to implement.

And that’s what I mean about the code “falling out” of implementing the actual process of what you’re trying to program.

I suppose there’s a case to be made here that you might wind up with duplicated code on a large codebase, simply because one programmer didn’t know what another programmer had done, but these kinds of things will happen. Refactoring the duplication, once discovered, is just part of the job.

Fallout 76 | Our Fallout 25th Anniversary celebration concludes with interviews, events and more perks!

Fallout finishes its month-long 25th anniversary celebration strong with a spine-tingling Fallout 76 event, behind-the-scenes looks, in-game rewards and more.

Source: Fallout 76 | Our Fallout 25th Anniversary celebration concludes with interviews, events and more perks!

Buried under the lede is this gem:

Awww, yiss! My playthrough is stuck because of a bug in the unofficial patch, and I can’t finish Nuka World, i.e., the best part of the whole game. I’d be glad to start over on this edition. I’m sure “2023” means something much later than I would like, but at least we finally have a year!

Fake CISO Profiles on LinkedIn Target Fortune 500s

“I shot a note to LinkedIn and said please remove this, and they said, well, we have to contact that person and arbitrate this,” he said. “They gave the guy two weeks and he didn’t respond, so they took it down. But that doesn’t scale, and there needs to be a mechanism where an employer can contact LinkedIn and have these fake profiles taken down in less than two weeks.”

Source: Fake CISO Profiles on LinkedIn Target Fortune 500s

Allowing companies to take down profiles they don’t like sounds exactly like something Microsoft would be all about.

Cleaning the Griddle

Griddle Brick

When I “went to college” at Purdue, I stayed in the dorm all 4 years. What can I say? I liked the convenience of someone else cleaning the bathrooms and doing the cooking. For freshman year — and the second half of senior year, because I had such a light schedule — I worked in the kitchen, for fun and profit. I usually ran the grill and deep fryers. I have a knack for keeping track of time in my head, and I almost never (like, only once ever) burned food.

After working a supper shift, everyone had a cleaning job. If you ran the grill, of course, it was to clean it. They had these “bricks” to help with the job. (I’ve attached a screenshot of one from Amazon, but that price seems high. I’m sure you could do much better from some commercial kitchen supply place.) Anyway, the first time I had to do it, it was explained to me by a shift supervisor that this was a hard job, and it took most people 2-3 hours to do, and they gave me one of these griddle bricks to help.

The brick they gave me was worn down, and literally caked with grease. All the little pores that you can see in the picture were clogged. The face of the thing looked smooth. I started scraping with it, and noticed that, while the thing was very hard, it was also brittle. You could “crunch” the brick by leaning on an edge, which would expose a new “row” of pore edges to actually scrape gunk off the grill. Once I figured this out, I used a spatula to shave off all the clogged part of the brick, and worked out a technique of very slowly rotating the brick while putting all my weight on the edge. This move gradually exposed a new set of “teeth” as I worked the brick and cleaned the grill. In direct opposition to what I had just been told, it worked amazingly well.

On my first attempt, I think I finished in about 45 minutes. The supervisor was incredulous. But she looked at the grill, and admitted she had never seen it so clean, and I clocked out.

The next time I cleaned the grill, I had mastered my technique, and I was done in 15 minutes. However, I had used up a good portion of the brick. About half to three quarters was ground off during the process. I figured, hey, that’s what they were for, right? Wrong.

The supervisor was angry this time. These bricks cost a dollar apiece! I couldn’t just use one up every night! Granted, minimum wage at the time was $3.15, so this seemed like a bigger deal then. But I just asked: would they rather pay me for 3 hours of work, and spend $10 on labor, or pay me $1 for 20 minutes, and 75¢ for the brick? Well, at least she could see the math, and left me alone about it.

I had to explain this a couple more times to other managers. However, I couldn’t manage to impart my technique to anyone else, so others continued to struggle with the job.

I have no idea why I’m thinking about this today, or why I feel compelled to write about it.

Postscript: Amazon listings are really, really stupid sometimes. This copy says the brick cleans the grill without abrasives. LOLWUT? This brick is the most abrasive thing in the world. That’s why it works. There’s also a lifetime guarantee. I have no idea how someone could ever put that on one of these, and I can’t imagine trying to collect once you figure out that these things are expendable. Truly mystifying.

Something is wrong on the internet | by James Bridle | Medium

This, I think, is my point: The system is complicit in the abuse.

And right now, right here, YouTube and Google are complicit in that system. The architecture they have built to extract the maximum revenue from online video is being hacked by persons unknown to abuse children, perhaps not even deliberately, but at a massive scale. I believe they have an absolute responsibility to deal with this, just as they have a responsibility to deal with the radicalisation of (mostly) young (mostly) men via extremist videos — of any political persuasion. They have so far showed absolutely no inclination to do this, which is in itself despicable. However, a huge part of my troubled response to this issue is that I have no idea how they can respond without shutting down the service itself, and most systems which resemble it. We have built a world which operates at scale, where human oversight is simply impossible, and no manner of inhuman oversight will counter most of the examples I’ve used in this essay. The asides I’ve kept in parentheses throughout, if expanded upon, would allow one with minimal effort to rewrite everything I’ve said, with very little effort, to be not about child abuse, but about white nationalism, about violent religious ideologies, about fake news, about climate denialism, about 9/11 conspiracies.

Source: Something is wrong on the internet | by James Bridle | Medium

(Emphasis mine.)

This is simply not true. It’s not true at all. Google made 85 BILLION dollars last year. They absolutely, positively, unquestionably can invest in some more machines to flag more types of content, and hire people to review the flags.

And don’t try to tell me they couldn’t programmatically de-list the kinds of accounts that are pumping out the kind of generative garbage described in the article. I could write a 100-line Perl script to catch this. It’s like the argument about how the App Store is so big that Apple couldn’t possibly catch all the fraudulent apps, but one guy looking at it in his spare time has identified scores of easily-caught problems that scam hundreds of millions of dollars out of the ecosystem.
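To be concrete about what that hypothetical script might look like: here’s a crude keyword-salad heuristic in Ruby rather than Perl (the keyword list and the idea of a ratio threshold are invented for illustration; a real system would combine many such signals):

```ruby
# Titles from the accounts the article describes are little more than
# strung-together trending keywords. Score a title by the fraction of
# its words drawn from a known high-traffic keyword list (hypothetical).
KEYWORDS = %w[surprise eggs finger family peppa elsa spiderman
              learn colors kids toys nursery rhymes].freeze

def keyword_salad_score(title)
  words = title.downcase.scan(/[a-z]+/)
  return 0.0 if words.empty?
  words.count { |w| KEYWORDS.include?(w) }.fdiv(words.size)
end

keyword_salad_score("Surprise Eggs Finger Family Peppa Learn Colors Kids")
# => 1.0 (every word is a known keyword; flag the account for review)
keyword_salad_score("How to clean a griddle")
# => 0.0
```

Naive, yes, but the point stands: the generative garbage is formulaic by construction, which makes it formulaically detectable.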

At the end of the day, it’s a problem with misaligned incentives. Just like with Apple and the App Store, Google doesn’t want to fix the problem, because they benefit from the algorithmic/generative advertisement click-bait fraud scheme made possible by their platform being “game-able.” Corporations being the beasts they are, the only way to solve this problem is through legislation. Unfortunately, campaign finance laws being the beasts they are, that’s not going to happen.

And, as if on cue:

Zhukov’s trial established how the trade in fake clicks works. Between 2014 and 2016, the so-called King of Fraud—a name he gave himself in a text message, revealed in court—ran an advertising network called Media Methane, which received payments from other advertising networks in return for placing brand’s adverts on websites. But the company did not place those adverts on real websites. Instead it created fake ones, spoofing more than 6,000 domains. It then rented 2,000 computer servers in Texas and Amsterdam and programmed them to simulate the way a human would act on a website—using a fake mouse to scroll the fake website and falsely appearing to be signed in to Facebook.

Source: How Bots Corrupted Advertising | WIRED

Click fraud has been around since the rise of Google, but I guess everyone collectively agreed to ignore it as a cost of doing business, like “shrinkage” in retail. It stands to reason that these efforts have gone full-blown industrial now, and surely must be making a dent in someone’s pocketbook, but I guess everyone in the advertising economy is too entrenched now to do anything different. Advertising may be the single biggest sector in the American economy at this point. So they go after one dude, and make an example of him, meanwhile, the algorithmically-generated advertisement-bait is considered legitimate.

“Algorithms” are ruining everything that made pop culture interesting.