The Brief / Issue 001

Rehrig Pacific's AI Read 14,000 Pages of Its Own Binders Before It Ever Talked to a Customer

A 112-year-old plastics manufacturer in Los Angeles made the contrarian first AI move of 2025.

Operator Will Rehrig, CEO, Rehrig Pacific
Revenue ~$410M
Industry Plastics manufacturing / Returnable logistics
Published January 15, 2026 13 min read

THE CRAFT

Will Rehrig indexed 14,000 pages of internal technical documentation into a GenAI chatbot before Rehrig Pacific built a single customer-facing AI feature.

Think about that sentence for a second. Rehrig Pacific is a 112-year-old, $410-million, fourth-generation family plastics manufacturer in Los Angeles. They make the reusable pallets under your grocery distributor’s trucks and the waste carts at the end of your neighbors’ driveways. In 2024, when they sat down to make their first serious AI investment, they had every consultant, every vendor, and every LinkedIn thought-leader in the industry telling them the same thing: start with the customer. Put AI in the product. Build a chatbot on the website. Automate the contact form. Point the first dollar outward, toward revenue, toward the thing that shows up in a board slide.

Will Rehrig pointed the first dollar inward. At his own engineers. At his own binders. At the tribal knowledge locked inside the heads of the people running his plant floors — most of whom are a decade or less from retirement.

The early numbers are quiet but real, and the decision is one I want every founder-CEO in this audience studying — because it is the cleanest operator story I have seen this year for how a $100-million to $500-million traditional-industry business should think about a first AI deployment. It is also the exact opposite of what the vendor pitches are telling you to do. Which, if you’ve been reading this newsletter for any length of time, is usually a signal that the story is worth the next 13 minutes of your morning.

THE OPERATOR

The situation

Start with the shape of the business, because it tells you why the decisions downstream make sense.

Rehrig Pacific is 112 years old. Fourth-generation family ownership. Will Rehrig has been president since 2005 and CEO through the transition. The product catalog is physical and industrial: returnable plastic pallets for grocery and beverage distribution, crates for direct-store-delivery fleets, the ninety-gallon waste and recycling carts that municipalities buy by the tens of thousands. This is a plastics manufacturing and logistics business, not a tech business. The margins are thin, the competitors are global, and the customers are large, price-sensitive, and increasingly demanding real-time visibility into where their assets are and what shape they’re in.

Three pressures converged on Rehrig in the early 2020s. The first was labor. Warehouse picking, pallet auditing, and loading bay work are labor-intensive, error-prone, and getting more expensive every year. The second was customer expectation. Municipal customers and beverage distributors were starting to ask for SKU-level visibility, route-level telemetry, and order accuracy SLAs that legacy dashboards built in React couldn’t deliver in anything under a months-long engineering cycle. The third was the one nobody talks about at AI conferences, and the one I think drove the most important decision Rehrig made: a 112-year-old manufacturer has a lot of tribal knowledge walking toward the exit.

Think about what that actually means. Decades of machine specs, procedure notes, troubleshooting checklists, tolerance tables, and hard-won operational lessons — most of it sitting in three-ring binders on plant floors or in the heads of engineers who started there in 1987. When those people retire, that knowledge goes with them. If you’re Will Rehrig, sitting on top of a business where the machines require specific, idiosyncratic expertise to run, that’s not a theoretical risk. That’s a cliff.

So in 2023 Rehrig started moving. And where they pointed the first move is the part I want you to pay attention to.

The move

Rehrig did three things between 2023 and 2025. I’m going to walk through them in the order they matter for a founder-CEO reading this, not chronologically.

One: they indexed 14,000 pages of internal technical documentation into a GenAI chatbot before they built a single customer-facing AI feature.

That sentence is the whole issue. Read it again.

Per a June 2025 Fortune profile, Rehrig trained a generative AI chatbot on more than 14,000 pages of internal technical reference material — machine specs, engineering documents, plant floor procedures, the accumulated operational knowledge of a century-old manufacturer. They deployed it to factory floor engineers through a smartphone app. Instead of flipping through binders to troubleshoot a machine, an engineer can now pull out their phone, ask a question in plain English, and get the answer pulled from a hundred years of institutional memory.

There is no press release saying this, but you don’t need one to see it: this is a knowledge-preservation play dressed up as an AI deployment. Rehrig spent their first real GenAI cycle converting tribal knowledge into a queryable corpus. The chatbot is a delivery mechanism. The actual move was the digitization of operational memory while the people who hold it were still on the payroll.
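None of the coverage says how Rehrig’s chatbot is built under the hood, but the general pattern (usually called retrieval-augmented generation) is simple enough to sketch: chunk the documents, retrieve the passages most relevant to a question, and hand those passages to a language model as context. Below is a deliberately toy Python version of the retrieval half, assuming nothing about Rehrig’s actual stack. A production system would use embeddings and a vector store; plain term overlap keeps this self-contained, and every name and document here is hypothetical.

```python
# Toy sketch of the knowledge-indexing pattern: chunk documents,
# then retrieve the chunks most relevant to a plain-English question.
# Real systems use embeddings + a vector store; term overlap is used
# here only to keep the sketch self-contained. All content is made up.

def chunk(text, size=200):
    """Split a document into overlapping word chunks of ~`size` words."""
    words = text.split()
    step = size // 2  # 50% overlap so answers aren't cut at chunk edges
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - step, 1), step)]

def retrieve(question, chunks, k=2):
    """Rank chunks by how many terms they share with the question."""
    q_terms = set(question.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(q_terms & set(c.lower().split())),
                    reverse=True)
    return ranked[:k]

# Hypothetical page from a plant-floor binder:
manual = ("To clear a jam on the cart extruder, first lock out power. "
          "Then open the feed gate and inspect the screw for debris.")
chunks = chunk(manual, size=12)
top = retrieve("how do I clear an extruder jam", chunks, k=1)
# `top` would be handed to the language model as context for the answer.
```

The retrieved passages, not the model, are what make the answers trustworthy: the chatbot is only as good as the corpus behind it, which is why the digitization step was the real work.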

Two: they partnered with TensorIoT to build a real-time data platform on AWS.

TensorIoT is an AWS Advanced Tier partner that specializes in IoT and GenAI infrastructure. Rehrig brought them in to solve a specific problem: the company was generating large volumes of real-time data from two very different sources — garbage truck fleet telemetry (the physical assets out in the world) and Salesforce CRM data (the customer and order layer) — and had no unified way to query, visualize, or act on it. Per TensorIoT’s public case study, the platform they built for Rehrig is architected to render 50,000 data points in under ten milliseconds, serving a mobile app and web UI for internal teams as well as customer-facing dashboards.

The decision to partner, rather than build in-house, is the part I’d circle. Rehrig is a manufacturer. They are not, and should not become, a data platform engineering shop. Bringing in a specialist partner who lives on AWS full-time bought them speed, competence, and the ability to keep their own engineering headcount focused on the things only Rehrig can do.

Three: they killed custom React dashboards and moved analytics to Amazon QuickSight with Amazon Q.

This is the move that sounds the smallest and matters a lot more than it looks. Per the AWS Business Intelligence blog, Rehrig’s previous approach to building customer-facing dashboards was custom React development — and it was taking weeks to months to ship a single dashboard. After the QuickSight migration, they stood up their first dashboard in a two-week sprint. Amazon Q, the GenAI layer on top of QuickSight, gave non-technical users the ability to ask conversational questions of the data.

In isolation, this is a BI tooling story. In the context of the other two moves, it’s a pattern: Rehrig systematically killed the places where custom code was a tax on speed, and replaced them with managed services plus a layer of AI on top. Custom dashboards became QuickSight plus Q. Custom data platform engineering became TensorIoT plus AWS. The places they kept custom were the places where their domain expertise was actually the product — the Vision Object Recognition AI on the warehouse floor, the integration with their own fleet telemetry, the training data curation for the chatbot.

The result

Rehrig is not a public company and doesn’t publish an AI scorecard. What we have is a collection of operator-level data points from Fortune, AWS, TensorIoT, and a Reusables Alliance case study on one of their customer deployments.

The Vision Object Recognition system — deployed in 2024 across warehouse and direct-store-delivery operations — has measurable outcomes from the Adams Beverage customer deployment: 24,000 pallets wrapped and audited, 20 seconds of labor saved per pallet, 133+ cumulative hours of labor savings, and a reduction in the baseline 5% order picker error rate. The wrapping and audit process that used to take four minutes per pallet was compressed materially. These are small numbers per transaction and big numbers at volume — which is exactly the shape you want in a warehouse labor story.
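The published per-pallet and cumulative figures are consistent with each other, which is worth verifying when a case study hands you numbers. The back-of-envelope check, in a few lines of Python:

```python
# Sanity check on the published Adams Beverage figures.
pallets_audited = 24_000          # pallets wrapped and audited
seconds_saved_per_pallet = 20     # labor saved per pallet

hours_saved = pallets_audited * seconds_saved_per_pallet / 3600
# 24,000 pallets x 20 s = 480,000 s, or just over 133 hours,
# which matches the "133+ cumulative hours" figure in the case study.
```

When a vendor’s per-transaction and cumulative numbers don’t reconcile this cleanly, that is your first question in the reference call.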

The QuickSight move collapsed dashboard deployment from a months-long React cycle to a two-week sprint. AWS’s own published case study confirms the first dashboard shipped in two weeks. That’s a real reduction in engineering drag.

The GenAI chatbot’s adoption metrics are not public. This is an honest gap in the story, and I want to flag it directly: we don’t know how many Rehrig engineers actually use the chatbot daily, how accurate the answers are, or what the hallucination rate looks like on niche technical questions. Fortune covered the deployment; Fortune did not interrogate the usage data. If I were Will Rehrig’s peer, this is the first question I’d ask him over coffee. The move is still the right move even if the adoption curve turned out to be slower than they hoped — the binders don’t get un-digitized — but the honest state of the evidence is that we know what they built and we don’t know yet how much of the organization uses it.

The TensorIoT platform’s business outcomes are similarly described in case-study terms: improved operational visibility, real-time customer notifications, AI-driven supply chain insights. No dollar figures. Case study prose. Take it at the level it’s offered.

The Craft of AI read

Here’s where I think this story actually lives. There are four decisions inside the Rehrig move that I would steal — in this order — if I were sitting in a $200M traditional-industry CEO chair with a blank AI roadmap.

Decision one: index your tribal knowledge before you index your customers. This is the one that gets me. Every AI vendor in the world is going to walk into your office and pitch you a customer-facing chatbot. The Rehrig story is a quiet argument that the highest-return first GenAI deployment for a traditional-industry operator is the one pointed at your own retiring workforce. Your senior engineers, senior operators, senior sales reps — the people who carry institutional memory in their heads — are a depreciating asset. A GenAI corpus built from their documentation, their procedures, their recorded training sessions, and their trouble tickets is an appreciating one. The customer-facing AI can come later. The people who hold the keys to your operation cannot be re-hired once they’re gone. If you are a founder-CEO of a manufacturer, distributor, specialty industrial, or family-held traditional business — your first AI project should probably not be a chatbot on your website. It should be a chatbot on your engineers’ phones.

Decision two: you are not the team that is going to build this — hire specialists for every layer of the stack. Rehrig partnered with TensorIoT on the data platform. They partnered with AWS on the managed services. They brought in outside expertise on the Vision Object Recognition build. Notice the pattern: at every layer where the work required deep current expertise in AI infrastructure, IoT telemetry, or computer vision, Rehrig brought in people whose day job is that specific discipline. I want you to read that sentence twice, because the failure mode I watch traditional-industry CEOs fall into over and over is the belief that they can scale their existing engineering team into an AI team by hiring two more people and reading some blog posts. You cannot. The AI talent market is the most competitive talent market in the world right now, and you are not going to out-hire the specialists, you are not going to out-learn them in eighteen months, and you should not try. Your in-house engineers are valuable for the institutional knowledge they hold about your operation, not for their ability to ship production ML infrastructure in a quarter. Bring in specialists at every layer. Pay what it costs. The dollars you think you’re saving by trying to go it alone are the most expensive dollars you will spend this decade.

Decision three: stop building decorations, start building actions. The public Rehrig case study celebrates the move from React dashboards to QuickSight. I want you to see through that. The real lesson — the one that matters if you are making your first AI investment right now in 2026 — is that the entire category of “stand up a BI dashboard so a VP can look at numbers” is a legacy frame. Dashboards look useful to a COO who grew up in the era of weekly business reviews. They are not where AI belongs in 2026. The modern version is information that tells a specific person what to do next, at the exact moment they need to do it, at every level of the organization. Think about a regional airport. The old model is a dashboard on the ops manager’s monitor showing de-icing truck locations, gate statuses, and on-time percentages. The new model is a text message to the specific driver — “move truck 4 from gate 2 to gate 8, flight 1147 is stalled and the crew is waiting” — and, simultaneously, a note to the ops manager — “shift the 10 a.m. team huddle to 2 p.m., the morning push is running forty minutes behind on de-icing and your presence on the ramp is worth more than the meeting.” Same underlying data. Radically different output. One tells you the state of the world. The other tells the right person, at the right level, what to do about it — and in most cases, does it without waiting for them to log into anything. The question I want you holding when the next vendor walks in with an AI dashboard pitch is this: is the output of this system a screen somebody has to look at, or a specific instruction delivered to a specific person at the moment the instruction matters? Build the second thing. The first thing is the 2015 version of AI dressed up in a new logo.
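The airport example reduces to a few lines of code, and the point is the shape of the output, not the code itself: the same event record can either feed a dashboard row or produce a routed instruction. The sketch below is pure illustration — hypothetical fields, hypothetical routing, no vendor’s API.

```python
# Toy illustration of "state vs. instruction." A dashboard would render
# the raw event as-is; an action layer turns the same record into a
# specific instruction for a specific person. All fields are hypothetical.

def to_instruction(event):
    """Route a raw telemetry event to the person who should act on it."""
    if event["type"] == "gate_stall" and event["idle_trucks"]:
        truck = event["idle_trucks"][0]
        return {
            "to": f"driver:{truck}",
            "message": (
                f"Move {truck} to gate {event['gate']}: "
                f"flight {event['flight']} is stalled and the crew is waiting."
            ),
        }
    return None  # nothing actionable; the dashboard row would still render

event = {"type": "gate_stall", "gate": 8, "flight": 1147,
         "idle_trucks": ["truck-4"]}
action = to_instruction(event)
```

The dashboard version of this system ends at `event`. The action version ends at `action`, delivered to one phone. That gap is the entire argument of decision three.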

Decision four: follow the labor, not the elegance. Rehrig deployed Vision Object Recognition in direct-store-delivery warehouses before they deployed it on the manufacturing floor. From a pure AI-interestingness perspective, the manufacturing floor is the more elegant problem — defect detection, equipment health, process optimization, the stuff that gets standing ovations at AWS re:Invent. But the DSD warehouse was where the labor was, where the customer pain was, and where the money was. They went there first. If you are tempted to point your AI investment at the most technically interesting corner of your business, ask yourself where 80 percent of your labor cost actually sits and point the investment there instead. The interesting problems can wait. The expensive ones cannot.

If I had one critique of the Rehrig execution, it would be this: the public story is too clean. Fortune wrote it as a smooth success. AWS wrote it as a smooth success. TensorIoT wrote it as a smooth success. I guarantee you it was not a smooth success — no 112-year-old manufacturer digitizes 14,000 pages of technical documentation, migrates to the cloud, stands up a real-time data platform, and deploys computer vision in warehouses without a pilot that stalled, a model that underperformed, a scope that got cut, or a plant manager who hated the whole thing for six months. Those stories are the most valuable part of any operator transformation, and they’re exactly the stories that don’t make it into the case study. The thing I’d ask Will Rehrig, if he were reading this: which of these four moves was the hardest, and what did it cost you to get it through? The answer to that question is worth more than the case study.

Things to consider

Five questions the Rehrig story should provoke, if you’re a founder-CEO of a $100M–$500M traditional business:

  1. How much of your operational knowledge is locked inside employees who will retire in the next five years? If the answer is “a lot” (and most of you don’t actually know, because you’ve never counted), a GenAI corpus built from their documentation and recorded expertise is a higher-ROI first move than anything your CMO is pitching you.

  2. Where in your tech stack are you paying custom-build prices for commodity output? Dashboards, internal tools, CRM extensions, reporting pipelines. Make the list. Move the top three to managed services. Use the reclaimed engineering time on the places where your domain is the moat.

  3. Are you building infrastructure you should be buying? Every mid-market CEO I know has at least one engineering project that is quietly becoming a bad clone of a thing you could license. Find yours. Kill it. Your engineers will thank you in six months.

  4. Where is 80 percent of your labor cost actually sitting? Point your AI investment there. Not at the interesting problem. At the expensive one.

  5. If your CTO or head of ops disappeared tomorrow, what percentage of your company’s operational memory would go with them? If that number makes you uncomfortable, you know where to spend your first AI dollar.

THE WORKBENCH

Two versions of the same work. A small one you can do Tuesday morning. A rigorous one we do for clients.

The Tuesday morning version

Take thirty minutes and do this exercise yourself, before you talk to anyone.

Print one question at the top of a blank page: What do my senior engineers, operators, and frontline leaders know that isn’t written down anywhere? Then call three of them — the ones closest to retirement, the ones with the most scar tissue, the ones whose departures would hurt the company the most. Ask that question directly. Write down what they tell you. Do not correct them, do not interrupt, do not rush them to the next thing on your calendar. Let them talk.

You are not going to build a GenAI corpus this week. You are going to build the first thing a GenAI corpus needs, which is an honest accounting of the institutional memory you’re about to lose. Every answer on that page is a future chunk of a future knowledge base. Every name on that list is a person whose knowledge is an appreciating asset while they’re on the payroll and a depreciating one the moment they’re not.

The exercise costs nothing and takes less than a morning. By the end of it, you will know more about where AI actually belongs in your business than ninety percent of the CEOs who paid a consulting firm for a 60-slide deck last quarter — because you will have started from the thing that is at risk in your operation, instead of the thing that is trendy in the press.

The rigorous version — The Ground-Up AI Model

Here is where I have to be direct with you, because the Tuesday morning version is a taste, and a taste is not a deliverable.

What we do for clients at The Craft of AI is called The Ground-Up AI Model, and it is the professional version of what I just described. Every AI strategy you have been sold starts in the boardroom. This one starts on your floor. We spend five structured days inside your operation — on the plant floor, in the warehouse, in the customer service pod, on the ride-alongs with your drivers — running design-research interviews and observation sessions with the people who actually run the work. Twenty to thirty conversations. Structured prompts. Coded themes. The kind of synthesis that cannot come from three phone calls over a morning.

What you walk out with is a target operating model for your business that has AI built in from the ground up — not bolted onto the side of the org chart, but embedded in the places where your own people told us it belongs. And alongside the model, a 90-day execution roadmap your team can run without us. No permanent consultant on retainer, no open-ended engagement. A clean deliverable you own.

The investment is $10,000. The reason I name the number here is that I want you to understand what it is buying: the rigorous version of the work, not a taste of it. Founder-CEOs of $100-million to $500-million businesses who run the Tuesday morning exercise and realize they want the full picture are exactly the people I built this for.

You can run the exercise yourself this week. You should. And when you finish and you are looking at a page of answers and realizing the gap between what you now know and what a real operating model requires — that is the conversation we should probably be having.

THE QUESTION

The question I want you to sit with, and then answer me:

If I walked into your company tomorrow and asked your three most senior operators what they know that isn’t written down — what would they say, and would anyone be writing it down?

Hit reply, or send a note to grant@thecraftofai.com, and tell me. Two reasons you should:

First, I’m collecting answers for the next issue — specific knowledge-loss stories from founder-CEOs in this audience. Names stay confidential unless you tell me otherwise. The more specific the answer, the more useful the next issue will be for every other reader.

Second, if after writing out your answer you’re realizing the gap is bigger than a Tuesday morning exercise can close — if the honest version of your answer is something like “I have no idea, and I’m afraid of what I’d hear” — that is the exact conversation The Ground-Up AI Model is built for. I run a small number of these engagements each quarter with founder-CEOs in the $100-million to $500-million range. Reply to this email, tell me what you’re staring at, and we’ll talk about whether the workshop is the right next step for your business. No pitch deck, no discovery call theater. Just a direct conversation about whether the work is a fit.

— Grant K. Baldwin grant@thecraftofai.com

Get The Brief in your inbox.

Bi-weekly deep dives on how founder-CEOs of $100M–$500M operators are actually shipping AI. One story, no scanner filler, reply to Grant directly.