My House Obsession

As of this writing there is a house for sale here in Boulder that I am slightly obsessed with. It is located at 2595 Glenwood Dr (Redfin link) in North Boulder. We live in South Boulder, but this house is only a few blocks away from where we lived when we first moved to Boulder. It's also on one of the primary routes I use to go North/South through Boulder on my bike, so I've had a chance to keep an eye on the location for years. I am not obsessed with it because I like the house in particular. It's in a fairly typical modern style with multiple exterior materials and way too much white paint inside. It's nice enough, and I assume well built, but it's neither unique nor special. I'm obsessed because I find the history of the property fascinating: the people involved have been delusional for years about the worth of this location. In another universe, if the sellers had been less delusional, we might have thought about building a house on it. I'm also obsessed because I have a small bet with myself about the price at which the house will eventually sell, and I want to be correct.

My House Front View

There is some property history in that Redfin link above, but I'm not sure it's complete for the story I want to tell. The earliest record is December, 2022, but I think my obsession started even before that. Below is my best recollection of the order of events:

  • The lot the house sits on used to be part of a larger lot for the house next door, immediately to the West. Some time before December, 2022, the owners of that property put the East half of their property on the market. At the time that house had an extension that impinged on the proposed East lot. The property listing indicated that this extension would be demolished, but the upshot is they were trying to sell a piece of property that wasn't ready to be sold and developed. This is the first indication that the sellers were delusional. Delusion count = 1.
  • If my memory is correct, they first asked for $750,000. At the time, even during the height of COVID house pricing mania, the most expensive houses in that neighborhood were in the $1 to $1.2 million range. This means that to build a house that was in line with the neighborhood, a builder would have less than $500,000 to spend. This was not realistic, meaning that any house built would have to be the most expensive in the neighborhood by a large amount. Delusion count = 2.
  • Needless to say, the lot didn't sell. I think during this period the extension was demolished and the original house remodeled.
  • After some time, the lot listing was updated, but this time it included architectural designs and a $150,000 price increase to $900,000. I think this brings us to December, 2022. To summarize, the lot didn't sell at $750,000, and they thought that including plans for the house they wanted next to them justified increasing the price by $150,000. Maybe they thought that following through on the extension demolition justified part of the price increase? Delusion count = 3.
  • The property was eventually sold for $500,000 in June, 2024, a year and a half later. Redfin doesn't show this sale, but it's available on the Boulder County Property Assessor Site. The current owner is "GLENWOOD SPEC LLC," which is almost certainly an LLC created for the sole purpose of building and selling a home on this lot. There's probably nothing shady or unusual about this, but it is interesting.
  • Apparently the build went fast enough that it was put on the market in February, 2025, for $3,250,000. This is roughly three times the price of any other home in the neighborhood. It is quite a bit bigger than nearby houses, and newer (of course), but it sits on a small corner lot next to a semi-busy road, demerits that other homes do not have. Delusion count = 4.
  • It was at this point I made the bet with myself that the house would sell for no more than 70% of this price, or around $2.3 million. By the way, I cannot be sure, but I think that the real house is nothing like the design that was being sold for $150,000.
  • In the (almost) year since, they have slowly dropped the price. They dropped it by $300,000 after three months of not selling1, but since then by progressively smaller amounts, with the most recent drop a few days ago of just $30,000, bringing the latest asking price to $2,570,000. It looks like they got a bite in August, 2025, but it didn't go through. It's almost certain that in the last year the owners have been paying construction loans, insurance, and property taxes that I'm sure total at least $10,000 a month2. This means that each month the house remains unsold, they are effectively lowering their profit by $10,000 by being stubborn. I'm going to count this slow pace of price reductions and stubbornness as Delusion count = 5.
  • This brings us to today, and if my bet is correct, there is still nearly another $300,000 to drop until it's sold. And I could be wrong about my bet by being too conservative. The actual sale price could be lower than that!

Would I buy this house? Certainly not for $2.3 million. There is a price I would pay, but it's a price the seller would never accept. The lot is really small, and I don't think I would want to live next to the semi-busy road. I do like the neighborhood: it's a few blocks to the nearest grocery store and a short bike ride to downtown. We would like a bigger house, but I don't think this is for us.

Finally, here are two regressions predicting when the house might sell. First, if we assume a linear pace of price reductions with respect to time, the $2.3 million price I predict it will sell for will be reached about 420 days after the initial listing. That's around three months from now.

Linear Regression House Ask

The sellers probably don't want to sell the house at a loss, meaning that there's probably a floor below which they really don't want to go. In that case, the ask price might be following a 1/x-like curve. Using that fit, I predict their price floor is $2.26 million and it will take around 3615 days to reach $2.3 million, which is about 9 years from now. If this fit is correct (it probably isn't, and I hope it isn't for their sake), they would spend over $1 million on taxes, insurance, and the construction loan before the house sells, which would likely be a massive loss. Which is why this fit is almost certainly baloney!

1/x Regression House Ask
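
Out of curiosity, here is a minimal sketch of how both fits can be done with scipy. The price history below is made up to roughly match the story above (the real history is on the listing), so the fitted numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical asking-price history: days since the initial listing and the ask.
# These are illustrative stand-ins, not the actual listing data.
days = np.array([0.0, 90.0, 150.0, 210.0, 270.0, 330.0])
ask = np.array([3_250_000, 2_950_000, 2_830_000, 2_720_000, 2_640_000, 2_570_000], dtype=float)

target = 2_300_000  # my bet for the eventual sale price

# Fit 1: linear drop in ask price over time.
slope, intercept = np.polyfit(days, ask, 1)
t_linear = (target - intercept) / slope
print(f"linear fit reaches ${target:,} about {t_linear:.0f} days after listing")

# Fit 2: 1/x-like decay toward a price floor the sellers don't want to go below.
def decay(t, floor, a, b):
    return floor + a / (t + b)

(floor, a, b), _ = curve_fit(
    decay, days, ask, p0=(2_300_000, 1e8, 100), bounds=([0, 0, 1], [3.25e6, np.inf, np.inf])
)
print(f"fitted price floor: ${floor:,.0f}")
if floor < target:
    t_decay = a / (target - floor) - b
    print(f"1/x fit reaches ${target:,} about {t_decay:.0f} days after listing")
else:
    print("the fitted floor is above the target price, so this fit never gets there")
```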

You can be sure I'll keep an eye on this property. Will the seller find new ways to be delusional? I'm excited to find out! I'll follow up even if I end up being wrong and the house is sold for more than $2.3 million.

  1. This is just shy of what we paid for our whole house! ↩︎
  2. If not higher. Construction loan rates are usually 2-3% higher than conventional mortgage rates ↩︎

Bush - Sixteen Stone

If you're not from England, I'll bet you didn't know that 16 stone is equal to 224 lbs.

Futurama Neutral

Early 1996 must have been an exceptionally slow time for new music, because this week's album, Sixteen Stone by the English rock band Bush, took roughly a year to reach #10 on the sales chart. This album featured several singles I remember hearing contemporaneously, including Comedown, Glycerine, and Everything Zen.

I have listened to Bush a decent amount, but interestingly I have listened to a different (and less popular) Bush album, The Science of Things, more than Sixteen Stone. My guess is that most of those The Science of Things listens came from before I subscribed to a music streaming service, when I listened to that album because it was what I had in my collection. Having listened to both albums a fair amount, I agree with the consensus that Sixteen Stone is the better album. I do recommend giving it a spin.


My Listening to Music in 2025

For the second year in a row, I managed to increase the number of music tracks I listened to year-over-year. It's the most I've listened to in ten years.

Scrobbles by year

Like last year, last.fm offers a year in review, and it looks like they made some changes. One thing that stood out to me is my "Music By Decade" bar chart, copied below. If you ask people when their favorite music is from, or equivalently when, in their opinion, the best music was made, they generally pick music from their teenaged years1. I won't deny that a good fraction of my favorite music is from my teenaged years, but I make a concerted effort to listen to new music. The chart below shows that the plurality of the tracks I listened to in 2025 are from the current decade, which shows I followed through on this effort. Even listening to 30-year-old albums nearly every week hasn't tipped the distribution heavily toward the 1990s.

Music By Decade

Let's see if I continue to listen to more music, and new music too, in 2026!

  1. see the chart at the bottom of this post ↩︎

My Reading in 2025

I'm not sure it was a New Year's resolution, exactly, but in 2025 I tried and (I would say) succeeded in reading more books than I have for a long time. Early in the year I signed up for goodreads1 and dutifully kept track of all2 the books I read throughout the year. According to goodreads, I read 40 books, but I think there was one I didn't finish because I disliked it, so the real total is 39.

I checked out nearly every book from the library. My local library branch is just a few blocks away from my home, and except for the most popular or newest titles, I can request that a book be put on hold and it will be delivered to the branch within a few days. That's just about as fast as ordering the book online and way cheaper. A decent chunk of my property tax goes to the library, so I figure I might as well take advantage of it!

Some highlights:

I think the pace of my reading was just about right. It didn't feel like I had to force myself to read, and nearly all the books I enjoyed. I hope to achieve a similar amount of reading in 2026. You can see my year in review here.

  1. Yeah, I know it's an Amazon product, but I've already admitted to hypocrisy when it comes to Amazon. Besides, I'm not giving Amazon any money here, just a lil' bit of not very interesting data. ↩︎
  2. I didn't track the children's books I read out loud at bedtime, which probably added at least one per week. ↩︎

My Cycling in 2025

It's time for some end of year recap posts, starting with my year in cycling 2025. A year ago I posted a recap of my year in cycling for 2024 which covered a wide range of topics including a short history of my bikes, cyclo-computers, and overall trends over two decades. This year's update is much more brief: I've updated the plots I made a year ago with another column for 2025. Here are some highlights:

  • I went on 164 rides, which is 31 more than in any previous year1. To put this in perspective, in 2018 I rode my bike only 22 times total
  • My total distance of 6,735 km and time of 277 hours are second only to 2011, which was pre-children. This even bested two years (2006, 2007) when I was still a graduate student and actively participating in races
  • I climbed over 80,000 meters, which is my most ever2
  • I climbed 3.5% more meters per km than last year, but my average speed was basically the same, which I think is cool

I don't know if I can practically beat some of the maxes above unless I make some life changes, such as moving somewhere without four seasons, or convincing someone to pay me to ride my bike (which in all honesty, no one should). Overall, I feel pretty great about my cycling in 2025, and my goal for 2026 is to achieve about the same results.

  1. It's likely that my true maximum(s) are higher, such as my gap year between college and graduate school when I was basically unemployed, didn't have a girlfriend, had no responsibilities, and rode my bike a bunch. But I don't have records from then ↩︎
  2. I did some of this analysis a few days ago and realized I was this close 🤏🏼 to besting my previous climbing max, so I purposely did a hill repeat ride to close out the year ↩︎

RadioX Streamer Update

RadioX

There is no new album in the top ten this week, nor next week. The holidays seem like a bad time to release a new album.

In the meantime, an application named RadioX I wrote about previously was updated (not sure when) and now includes automatic last.fm scrobbling that records a play for each new song. Previously, you had to manually click a button for each new song to scrobble it. This was really the only feature missing, and now it's basically perfect. I suggest you check it out!


Alan Jackson - The Greatest Hits Collection

I'm going to keep this short. The #8 album this week is a greatest hits collection of Alan Jackson songs. I've reviewed one of his albums before, which I gave a big ol' "meh." This gets a big ol' "meh," too.


Mannheim Steamroller - Christmas In The Aire

This week the top un-reviewed album is not Christmas In The Aire by Mannheim Steamroller; it is a Garth Brooks album. As I've already discussed, Garth Brooks doesn't allow his catalog on most music streaming services. I don't care enough about Garth Brooks to expend effort to listen to him, so we'll drop down to the #4 album.

I was getting a little bit worried that I would not have any Christmas music to review this year. We're just over a week away from Christmas and by this time last year I had reviewed two (1, 2) Christmas albums. And whoo boy is this some kind of Christmas music.

I wasn't previously familiar with Mannheim Steamroller, or at least if I've heard their music before, I never made the name association. Outside of deliberate parodies or joke albums (like this K9 Tunes Christmas album, which looks like it's AI generated, so it's triply bad), this is the absolute worst kind of Christmas music. It is highly electronic, uninspired, and super lame. It's the kind of music that would be played in a 1980s comedy film that takes place at a ski resort during the slapstick scenes of people falling down the mountain. Just simply horrible. Avoid this album, and I suspect everything Mannheim Steamroller has ever recorded. I'm not brave enough to find out by listening to more of their albums.

I want to mention the use of the extra "e" on "Aire." It appears that Mannheim Steamroller uses "Aire" as a virtual trademark; many of their albums are named "Fresh Aire." I'm glad this differentiates them from the better NPR Fresh Air program, but the extra "e" is still stupid. It reminds me of a local road here in Boulder I often ride on my bike, Olde Stage Road. There's a sign near the road that omitted the extra "e" (because, duh, it makes no sense), which some concerned citizen fixed in the most hilarious and cheap way possible:

Olde Stage Road


The Beatles - Anthology 1

The most remarkable thing about this week's #1 album is that it featured the first new Beatles song in 25 years, Free as a Bird. The rest of the album consists of outtakes, live versions, and short spoken interview clips from people in and associated with the band.

Like the last Beatles album I reviewed, this is not for casual listening, and it clocks in at over two hours long. You would not want this to be your introduction to The Beatles. In fact, I would urge anyone to listen to the entirety of their catalog before listening to any of the Anthology albums. It's important to have the context of what these songs ultimately sounded like when The Beatles perfected them.

This is not an album that needs to be listened to often and repeatedly. There's a reason why The Beatles worked as hard as they did to perfect their songs, and the polished versions are what you'll want to hear again and again. In the end, because it's the freakin' Beatles, it is worth checking out when you're in the right mood and have the time.


Waiting To Exhale Soundtrack

This week's top un-reviewed album, at #1, is by R. Kelly, and I'm not going to review that for obvious reasons. Instead I'll drop down to #4 for the soundtrack for the film Waiting to Exhale.

If you know anything about me, you should be able to guess that I have never seen the movie, and as far as I know I have never listened to the soundtrack before. The soundtrack is full of heavy-hitters like Whitney Houston, Toni Braxton, Aretha Franklin, and Mary J. Blige. Combined with the popularity of the movie, the album sold quite well, was reviewed positively, and won multiple awards.

I think I kinda sorta remember hearing a few of the songs off the album, but it's been years since I heard any of them. This is one of those albums that I can tell has quality music but I don't care about it. It does nothing for me so I can only offer a neutral opinion with no suggestion one way or the other.

Futurama Neutral


ETL Pipeline Improvements

One of my primary responsibilities at my current job is ownership of the ETL pipeline that brings in the data upon which we run our business. Every day it processes hundreds of gigabytes of data, cleaning and normalizing it, and outputting it in several different forms.

For a few years I have been using a Hadoop-based mrjob pipeline on top of AWS EMR. It replaced a hugely expensive (I kid you not) Redshift-based pipeline that I didn't write. It's been very reliable. For the last year or two the only failures have been when AWS's systems have had issues, something I can't do anything about. Despite this reliability, it hasn't been perfect. The biggest problem is that it's expensive to run. There are a few reasons why it's been so expensive:

  • EMR adds a 25% upcharge on all resources used. It is reasonable for AWS to charge something because EMR is a useful service with added value. However, in my opinion, 25% is too high for what it does. AWS already gets paid plenty because you're using their other services underneath it (mainly EC2 virtual computers and EBS block storage), so the extra mark-up feels like a big cash-grab. It is, of course, entirely possible to run Hadoop on AWS without using EMR. But it is a hassle, and it's likely that AWS has figured out that 25% is the inflection point between too expensive and not worth the hassle

  • Hadoop isn't the most efficient way of doing things. Newer tools, notably Spark, have surpassed Hadoop in both speed and features. I originally used Hadoop because I wasn't happy with how Spark needed quite a bit more memory than Hadoop for a similar operation, but over time I became less and less satisfied with my inability to speed up certain parts of the process

  • AWS offers spot compute instances, which are virtual computers that are significantly cheaper than on-demand instances. The difference is that on-demand nodes are yours as long as you want them, while spot instances can be taken away at any time with only two minutes warning.

    One of Hadoop's killer features is that during a MapReduce cycle, if one or more worker nodes goes away (for whatever reason including spot node removal), in most cases it can recover and redo any lost results.

    However, the EMR pipeline used a multi-step MapReduce process. Unfortunately, Hadoop cannot recover lost results from earlier, fully completed MapReduce cycles. This means that in order to run the pipeline, it had to have enough on-demand instances that the data that needed to be preserved across cycles fit on them. This raised the cost considerably when compared to an all-spot pipeline run

Earlier this year I decided it was time to start looking at how to rewrite the pipeline to dramatically lower costs. I saw two possible ways forward:

  • Choose a modern, high performance tool like Spark/PySpark or Dask that would still run on EMR but hopefully would be much faster

  • Abandon EMR entirely (and its 25% surcharge) and write something that could run completely on spot instances and use S3 for storage, which is four to five times cheaper than EBS

After some thought, I decided that the second option was the better choice. If I could figure it out, it offered the best possible outcome. The pipeline runs once per day, meaning that it has 24 hours to finish before the next run needs to start. Ultimately, high performance was less important than lowering costs.

I have been using Polars quite a bit and have been (mostly1) impressed with its speed and functionality. It is written in Rust, which is one of the fastest programming languages. Polars has a Python-facing API as a first-class member of the project. Rust has an ever-growing library of packages that I've found are high-quality and well documented (in contrast to my experience with cough R packages). The crucial difference between Polars and Hadoop/Spark/Dask is that Polars runs on only one node at a time (it can and does use all the CPU cores), while all of the others can run on multiple nodes. If I could figure out how to slice up the pipeline into chunks that would work on separate instances, I believed I could use Polars in place of Hadoop.

Jumping to the end of the story, I was able to convert the pipeline to use Polars, to great success. I use a simple pattern for each step. An orchestration process builds a list of work, which is submitted to SQS. An EC2 spot fleet is created, which launches workers that consume the work. The input and output of the work is stored on S3. The workers send success or failure messages to a callback queue on SQS, which is monitored by the orchestration process. If a spot node goes away and interrupts work, the work will be picked up by a different node once the message becomes visible on the queue again. Once the work is done for a step (i.e. all work has generated a callback), the orchestration process kills the spot fleet and continues to the next step (or sends an error message for a human to figure out).
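
To make the pattern concrete, here is a minimal sketch of the worker side. The queue URLs and message format are hypothetical, and do_work is a placeholder; the real pipeline's steps are more involved.

```python
import json
import boto3

# Hypothetical queue URLs; the real pipeline's queues are configured elsewhere.
WORK_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/pipeline-work"
CALLBACK_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/pipeline-callback"

sqs = boto3.client("sqs")


def do_work(task: dict) -> None:
    """Placeholder for one unit of work: read inputs from S3, run Polars, write outputs to S3."""
    ...


def worker_loop() -> None:
    while True:
        # Long-poll for one unit of work. If this spot node is reclaimed mid-task,
        # the message's visibility timeout expires and another worker picks it up.
        resp = sqs.receive_message(
            QueueUrl=WORK_QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20
        )
        messages = resp.get("Messages", [])
        if not messages:
            continue  # nothing to do right now; keep polling
        msg = messages[0]
        task = json.loads(msg["Body"])
        try:
            do_work(task)
            status = {"task": task, "status": "success"}
        except Exception as exc:  # report failures so the orchestrator can decide what to do
            status = {"task": task, "status": "failure", "error": str(exc)}
        # Tell the orchestration process how it went, then remove the work message.
        sqs.send_message(QueueUrl=CALLBACK_QUEUE_URL, MessageBody=json.dumps(status))
        sqs.delete_message(QueueUrl=WORK_QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])


if __name__ == "__main__":
    worker_loop()
```

The key design point is that a work message is only deleted after the callback is sent, so an interrupted task is never silently lost.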

The bottom line is that the cost has dropped by roughly 85%, primarily due to the following reasons:

  • Polars has the concept of LazyFrames, which are data objects that are not materialized in memory or computed until Polars is told to do so. Operations and filtering can be applied to them, and Polars can do the work in parallel with efficiency tricks that overall increase speed without loading the whole dataset into memory at once. The combination of sink_parquet with PartitionByKey is effectively a MapReduce operation that is much faster than Hadoop on similar hardware (see the sketch after this list)

  • AWS has "regions" and "availability zones (AZs)", which are the physical locations where cloud compute happens. Each AZ is a distinct data center (or a set of nearby data centers) within a region. When running an EMR job, you are restricted to a single AZ, largely because AWS charges for cross-AZ data transfer and EMR jobs are very loquacious across the network. There's also increased network latency between AZs. Running an EMR job in multiple AZs would hugely impact performance and cost.

    Because the new pipeline reads from and saves data to S3, and there are no cross-AZ charges for accessing S3 within a region, it doesn't matter which AZ the workers run in. This means that the spot fleets can target all AZs within the region, unlike EMR

  • When launching an EC2 fleet, you must specify one or more launch templates, which describe how to launch each instance in terms of OS and installed software. AWS EC2 offers instances using x86 processors from Intel and AMD, and ARM instances using AWS Graviton processors. Conveniently, the pipeline doesn't require any processor-specific features. Therefore, I created two AMIs, one for each of x86 and ARM, which allows the spot fleet to target any and all of Intel, AMD, and Graviton instances

  • The pipeline requirements for each step basically come down to the number of CPU cores and amount of RAM, more or less of each depending on what the step is doing. The upshot of all of the above is that for a given step, all the pipeline cares about is the resources of the node, not what kind of node it is. Of course, not all instances are the same speed, but the cost of an instance is roughly proportional to its speed, so it all works out. This means that for a given step, across all AZs and EC2 instance types, there can be over 100 distinct resource combinations to pick from. This basically guarantees spot availability at all times

  • The pipeline uses a fair number of User Defined Functions. Polars supports UDFs written in Python, Numba, and Rust using PyO3. By using the latter two, basically all of the inner loops and heavy computation in the pipeline happens in compiled C or Rust. This, in my opinion, is a really nice way of doing things. Let Python handle moving data around and high-level stuff, and run all the heavy computation in compiled code.
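
As a concrete illustration of the LazyFrame point above, here is a minimal sketch of one shuffle-like step. The bucket paths, column names, and aggregation are all hypothetical, and it assumes a recent Polars release where sink_parquet accepts a pl.PartitionByKey destination.

```python
import polars as pl

# Hypothetical locations and columns; the real pipeline's schema is different.
INPUT = "s3://example-bucket/raw/day=2025-12-01/*.parquet"
OUTPUT = "s3://example-bucket/partitioned/day=2025-12-01/"

lf = (
    pl.scan_parquet(INPUT)               # lazy: nothing is read or computed yet
    .filter(pl.col("status") == "ok")    # filters can be pushed down into the scan
    .with_columns(pl.col("value").cast(pl.Float64))
    .group_by("customer_id", "event_type")
    .agg(pl.col("value").sum().alias("value_sum"))
)

# Sinking the lazy query to Parquet files partitioned by key is, in effect,
# the shuffle-and-reduce half of a MapReduce step, done on a single node.
lf.sink_parquet(pl.PartitionByKey(OUTPUT, by="customer_id"))
```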
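
And to illustrate the UDF point, here is a sketch using a Numba-compiled function (the Rust/PyO3 variant follows the same idea). The scoring function is made up for the example and is not one of the pipeline's actual UDFs.

```python
import numba
import numpy as np
import polars as pl


@numba.njit(cache=True)
def decay_score(values: np.ndarray, ages_days: np.ndarray) -> np.ndarray:
    # Compiled inner loop: Python only moves data around, the hot path runs as machine code.
    out = np.empty(values.shape[0], dtype=np.float64)
    for i in range(values.shape[0]):
        out[i] = values[i] * np.exp(-ages_days[i] / 30.0)
    return out


df = pl.DataFrame({"value": [10.0, 5.0, 2.0], "age_days": [1.0, 30.0, 90.0]})

scored = df.with_columns(
    pl.struct("value", "age_days")
    .map_batches(
        lambda s: pl.Series(
            decay_score(
                s.struct.field("value").to_numpy(),
                s.struct.field("age_days").to_numpy(),
            )
        ),
        return_dtype=pl.Float64,
    )
    .alias("score")
)
print(scored)
```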

Overall, I'm very pleased about the results of this work. The goal was to save money, and it has done that. I wasn't expecting 85% savings (I'm not sure what I was hoping for), but I feel quite good about that.

  1. Polars definitely has some frustrating issues, like this one or this one ↩︎

Alice in Chains - Alice in Chains

It is unfortunate that I had a thirteen year gap in my listening project. If I had not stopped, or had restarted earlier, I might have reviewed Alice in Chains' two best recordings: the 1992 album Dirt and the 1994 EP Jar of Flies. They are two of my favorite albums/EPs; I have listened to tracks off them nearly 700 times.

In stark contrast, I have barely listened to the eponymous Alice in Chains album. Listening to this album again, I am not upset about my lack of plays. It is nowhere near as good as Dirt and Jar of Flies. On Tidal, of the top thirty Alice in Chains tracks by plays, only three songs are off the album Alice in Chains. I do not have a Spotify account so I can only see the top ten, but there are zero songs off Alice in Chains on that list.

I can't endorse listening to the album Alice in Chains, but I can strongly suggest giving Dirt and Jar of Flies a play. Dirt, in particular, is one of the greatest grunge albums of all time. The first chord and lyric of the first track, Them Bones, hit you hard and grab your attention like few other songs do.


Tha Dogg Pound - Dogg Food

This week's #1 album, Dogg Food, is by the Snoop (Doggy) Dogg-adjacent Tha Dogg Pound. Apparently the album has done quite well, having sold over two million copies as of 1996. Despite this success, I can't say that I recall ever hearing any of the songs before.

I have no strong opinions about this album. It's pretty typical hip hop from the era, nothing outstanding, and nothing horrible. It didn't grab my attention at all. I'll probably never listen to it again.


Smashing Pumpkins - Mellon Collie and the Infinite Sadness

This week (thirty years ago) one of the all time great albums hit #1 on the charts in its first week of sales. It would go on to sell over 10 million copies, making it one of the best selling albums of all time. And deservedly so. Mellon Collie and the Infinite Sadness by The Smashing Pumpkins is one of my favorite albums. According to last.fm, I have played a song off the album over 600 times.

I remember when the lead single off the album, Bullet with Butterfly Wings, hit the radio. Of course it's a banger, but what I remember thirty years later is mishearing the line "despite all my rage, I am still just a rat in a cage." I could make out the "despite all my rage" part, but I couldn't quite figure out the second half. I think I had some nonsense words there, but it's been so long since I learned the correct words that I've forgotten what I thought the words were!

Another single off the album, 1979, was cool because that's the year I was born. The song is about entering adolescence, and in 1995 I was in the throes of adolescence myself. A great coincidence!

All of the big singles off this album still get plenty of radio plays, but the whole album deserves to be listened to in its entirety. I did not find listening to this album another time for this project a chore or unpleasant. Indeed, it was a pleasure, and I look forward to listening to the album again and again for years to come. You should listen to the album today, and again and again.


Augment Code

Last month the company I work for purchased a subscription to Augment Code for all its developers. Augment Code is an AI coding engine that is broadly similar to Claude Code, which I played around with a few months ago. You can read the linked post, but the summary is that I came away mostly skeptical about AI coding. However, I am not a luddite, and am willing to learn new tools and try things again, so I installed the Visual Studio Code Augment plugin and have been giving AI coding another shot.

What I've learned is that AI coding agents are more useful than I previously gave them credit for, if you give them small, well-defined jobs. Asking one to do too much, which perhaps I did in my earlier blog post, is not (yet) what it's good at. Augment Code has a few modes. There is a chat box in which you can conversationally interact with the AI, asking it questions or giving it tasks to complete. The agent will also give autocomplete suggestions as you type new code, which I'd say is mostly helpful, but not always. Sometimes the suggestions are just plain wrong, but a few times the suggestions have been subtly wrong, which is dangerous.

Here are some example tasks that Augment Code has been at least 90% successful at:

  • In one of my Python projects, I asked it to create a file that tries to import all the packages used in the project, and output which packages do and do not import successfully (see the sketch after this list). This project uses a few custom & private packages I keep elsewhere, so a requirements.txt file doesn't work with pip. Having a file I can run to quickly check that I installed all the packages is useful. It did a pretty good job of this, but it did miss one import from one file, probably because the import wasn't near the top of the file (which is an anti-pattern, but it's there for reasons)
  • It is quite good at adding type hinting to Python projects. You can ask it to "add type hinting to all functions in all Python files in this directory" and it does it
  • I have a PyO3 project that it successfully threaded/parallelized using Rayon. I had to be very specific about how the inputs and outputs would change, but with that it did a good job. It wasn't perfect. Instead of using lightweight vector slices, it was creating new vectors for each chunk of parallel work, which involves copying memory. When I suggested a change, it did a good job fixing that oversight
  • I am an unapologetic user of Mercurial. The rest of the world uses Git. Kind of like Mac and Windows, Mercurial and Git are 99% the same in what you can do with them, but they differ in methods and style. In fact, there is a Mercurial plugin that allows perfect 1:1 interoperability between the two, which I use. Sometimes I don't want to bother installing Mercurial on a temporary virtual instance that already has Git installed, and I want to do something quick in Git. I've asked the AI agent to translate a Mercurial command to Git and it's done a fine job
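
For reference, the first task above amounts to something like this sketch (my reconstruction, not the agent's verbatim output); the package list is a placeholder.

```python
import importlib

# Placeholder package list; in practice this mirrors the project's real imports,
# including the custom/private packages that aren't pip-installable.
PACKAGES = ["numpy", "polars", "boto3", "my_private_package"]


def check_imports(packages: list[str]) -> bool:
    ok = True
    for name in packages:
        try:
            importlib.import_module(name)
            print(f"OK       {name}")
        except ImportError as exc:
            print(f"MISSING  {name} ({exc})")
            ok = False
    return ok


if __name__ == "__main__":
    raise SystemExit(0 if check_imports(PACKAGES) else 1)
```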

It appears that my company has a (grandfathered) $50/mo/user plan, which has jumped to $60/mo/user for new purchases. I would say that it has saved me enough time to justify that price. The real question is how much the service actually costs to provide. The AI industry is spending so much money that "to recoup their existing and announced investments, AI companies will have to bring in $2 trillion (every 2-3 years), more than the combined revenue of Amazon, Google, Microsoft, Apple, Nvidia and Meta." It feels like the early days of Uber, where the fares were subsidized by venture capital, and once most competitors were vanquished, prices went up. To reach $2 trillion in revenue, how high would prices have to go? There are currently a bit over 8 billion humans on earth, which means that over 3 years, the whole AI industry would need to take in about $80 from each and every person on earth per year. That is a ridiculous number and will not be reached any time soon.

My opinion is that AI does have some value, but not nearly as much as it costs in real terms. I'll use it if it works for me. But I won't rely on it.