• The Complexity of the COBOL Problem

    IBM’s stock dropped 13% the day Anthropic announced a COBOL translation tool. The COBOL crisis is significant, but a translation tool is not the entire solution. It is one part of a larger effort to preserve the efficacy of these critical systems while keeping qualified operators in place.

    Jeana Bolanos  ·  President, SalesE 


    When Anthropic published a blog post about an AI tool capable of translating COBOL into Java or C++, IBM’s stock fell 13% in a single session. The market had read the headline and drawn a conclusion: the COBOL problem had a path to resolution. An old language that can be translated into a new language.

    That conclusion was wrong. And the organizations running COBOL systems (think banks, insurers, government agencies, distributors and manufacturers) whose daily operations depend on them understand why.

    The COBOL problem is not a language problem. In fact, it’s the intricacies of the language that have kept it around and functioning in the most critical facets of our lives. It’s a challenge of scale and operational complexity.

    The challenge of scale and retirement

    COBOL is the operational backbone of the global economy. Seventy percent of the world’s critical business data runs on COBOL systems. Ninety-five percent of ATM transactions touch COBOL. More than $3 trillion in daily commerce flows through systems written in a language that was designed in 1959.

    70%: share of the world’s critical business data running on COBOL systems

    58: average age of a COBOL developer, per IBM

    These systems are still running because they work with a reliability that most modern systems cannot match. IBM data shows COBOL mainframes achieve 99.999% uptime. That is roughly five minutes of unplanned downtime per year. For a bank processing millions of transactions daily, that reliability is a necessity.

    The problem is not the technology; the challenge is the shrinking number of people who understand it. According to IBM, the average COBOL developer is 58 years old. Roughly 10% of the remaining workforce retires every year, and there are no meaningful educational pipelines replacing them. When those developers retire, they take with them something no translation tool can recover: a deep, intuitive understanding of how the system operates.

    When the last COBOL developer retires, they take decades of undocumented system knowledge with them.

    Why “just convert the code” misses the point

    Most people assume that translating COBOL into a modern language like Java or Python is fundamentally a language problem: swap the syntax, keep the logic. But it is significantly more complex than that, at a scale that is difficult to comprehend. COBOL and modern languages are built on different assumptions about how computers handle numbers, memory, and data. In financial and transactional systems (like distribution), those assumptions are the difference between a system that works and one that merely appears to work while hiding very small variances, which at scale become very large.

    Start with math. COBOL was designed from the ground up for business arithmetic. Every number in a COBOL system is stored and calculated exactly as it appears ($1.10 is $1.10), not as a binary approximation that rounds differently. Java’s default numeric types use a different approach that cannot represent many common decimal values exactly. Think of it like trying to write 1/3 as a decimal: you can’t do it precisely, you just get 0.333333 going on forever. Across millions of calculations, those tiny imprecisions accumulate. Java has a workaround (the BigDecimal type), but it must be used deliberately and consistently by every developer who touches the code. Miss it once and you have introduced an error that will never appear in an error log; it will simply produce slightly incorrect numbers.
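
    To make that concrete, here is a minimal Java sketch, assuming nothing beyond the standard library, contrasting the default double type with the BigDecimal workaround:

    import java.math.BigDecimal;

    public class DecimalDrift {
        public static void main(String[] args) {
            // Binary floating point: 0.10 has no exact binary representation,
            // so adding it ten times does not yield exactly 1.00.
            double binary = 0.0;
            for (int i = 0; i < 10; i++) binary += 0.10;
            System.out.println(binary == 1.0); // false
            System.out.println(binary);        // 0.9999999999999999

            // BigDecimal keeps the decimal digits exactly, the way COBOL's
            // decimal fields do: $0.10 added ten times is exactly $1.00.
            BigDecimal exact = BigDecimal.ZERO;
            for (int i = 0; i < 10; i++) exact = exact.add(new BigDecimal("0.10"));
            System.out.println(exact);         // 1.00
        }
    }

    Note that no exception is thrown in the first case. The number is simply wrong, which is exactly the failure mode described above.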

    When developers at the IRS attempted to rewrite a critical COBOL system, the COBOL programmers told them plainly: the new code couldn’t do the calculations right. That example is the best case, where the deficiency was identified before any changes were made. Anyone in the distribution space has watched companies migrate away from COBOL systems and, in the process, lose months of revenue and man-years of invested time, not to mention millions of dollars in platform and consultant fees.

    Then there is predictability. COBOL allocates memory in a fixed, predetermined way. The system behaves identically every time it runs. Modern languages like Java manage memory dynamically, periodically pausing to clean up unused data in a process that cannot be scheduled or predicted. In a system processing hundreds of thousands of transactions per hour, an unplanned pause is a real operational risk that does not exist in COBOL.
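
    As a rough illustration (a toy measurement under default JVM settings, not a benchmark), you can observe this jitter by timing the gaps between iterations of an allocation-heavy Java loop; the variation comes from memory-management work the runtime schedules on its own:

    import java.util.ArrayList;
    import java.util.List;

    public class PauseJitter {
        public static void main(String[] args) {
            List<byte[]> retained = new ArrayList<>();
            long worstGapNanos = 0;
            long last = System.nanoTime();
            for (int i = 0; i < 200_000; i++) {
                retained.add(new byte[1024]); // steady allocation pressure
                if (retained.size() > 50_000) {
                    retained.subList(0, 25_000).clear(); // make old data collectible
                }
                long now = System.nanoTime();
                worstGapNanos = Math.max(worstGapNanos, now - last);
                last = now;
            }
            // In COBOL's fixed-allocation model this gap would be essentially
            // constant; here it varies from run to run.
            System.out.printf("Worst gap between iterations: %.2f ms%n", worstGapNanos / 1e6);
        }
    }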

    Combine math and predictability with 30 or 40 years of production operation and you have a system that is extremely challenging to replicate. Every edge case a COBOL system has ever encountered has been handled correctly and is embedded deep in the code, and almost none of it is written down in operating documentation. It is encoded in the behavior of a system that has been tested by decades of transactions.

    Finally, COBOL systems don’t run in isolation. They sit at the center of ecosystems: they receive upstream inputs, feed downstream reporting tools, and produce output files that other applications have read in the same format for twenty years. A translated system that produces subtly differently formatted output can cascade errors into the systems that depend on it.
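
    The formatting differences can be as small as padding. A hypothetical sketch of how a fixed-width COBOL-style field (something like PIC 9(7)V99: nine digits, zero-padded, implied decimal point) differs from a naive modern translation:

    import java.math.BigDecimal;

    public class FixedWidthField {
        public static void main(String[] args) {
            BigDecimal amount = new BigDecimal("1.10");

            // COBOL-style fixed-width field: nine digits, zero-padded,
            // decimal point implied rather than written.
            long cents = amount.movePointRight(2).longValueExact();
            String legacyFormat = String.format("%09d", cents);

            // A naive translation writes the value the modern default way.
            String naiveFormat = amount.toString();

            System.out.println(legacyFormat); // 000000110
            System.out.println(naiveFormat);  // 1.10 -- a downstream fixed-width
                                              // parser misreads this silently
        }
    }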

    This is why code conversion is an interesting capability and an incomplete solution. The language is not the problem. The problem is that most organizations cannot fully describe what their COBOL system does in day-to-day operations.

    A path to a solution

    The best place to start is characterization. Build a complete, validated understanding of what the system does before deciding what to do with it. That understanding is the asset. The code is just the current container.

    At SalesE, we call this the Legacy Intelligence Framework. It has four phases, each a prerequisite for the next.

    SalesE Legacy Intelligence Framework — Four Phases

    Phase 01 (Characterize): Exhaustive test coverage maps the static system. Every input, output, rule, and exception is documented with no change to the code base.

    Phase 02 (Validate): Operational testing confirms the system’s behavior against documented expectations. Edge cases and undocumented logic are surfaced and recorded.

    Phase 03 (Map): A complete functional code map is produced: full documentation of processes, dependencies, and data flow. The system is now fully legible.

    Phase 04 (Transition): With a fully characterized system, the organization chooses its path: migrate to a new platform, convert the code, or continue operations with superior and trainable material.

    The outcome: a system your organization fully understands, owns, and can act on.

    The fourth phase is objectively the most exciting, but without the first three the path to success is unreliable. With a fully characterized system, migration and code conversion become realistic options for a more modern system. AI translation tools become genuinely powerful when you can run a translated version in parallel, validate its output against a documented baseline, and promote it with confidence. Or you may find that the right answer is neither migration nor conversion. For some organizations, a fully documented system that a less specialized team can operate without deep COBOL expertise may be the most valuable outcome available. It preserves continuity, eliminates single-point-of-failure dependency, and creates the breathing room for a deliberate, phased, risk-reduced modernization strategy.
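
    The parallel-run validation mentioned above is simple to express. A minimal, hypothetical harness (both system interfaces are stand-ins) replays the same inputs through both systems and collects every divergence for review:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Function;

    public class ParallelRunCheck {
        // Replays each input through both systems; any output mismatch is
        // recorded so it can be traced back to the documented baseline.
        public static <I, O> List<I> divergences(List<I> inputs,
                                                 Function<I, O> legacySystem,
                                                 Function<I, O> translatedSystem) {
            List<I> mismatches = new ArrayList<>();
            for (I input : inputs) {
                if (!legacySystem.apply(input).equals(translatedSystem.apply(input))) {
                    mismatches.add(input);
                }
            }
            return mismatches; // promote the translation only when this is empty
        }
    }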

    The market reaction to Anthropic’s tool was not entirely wrong. AI-assisted translation is a meaningful development. It will matter. But it will matter most to the organizations that have already done the hard work of characterization, so they can hand a fully documented system to a translation tool and validate the output within a tightly controlled evaluation environment.

    Download the SalesE Legacy Intelligence white paper
    A practical guide to characterizing, validating, and mapping COBOL-dependent systems and building a clear path forward to efficiency and automation.
    Visit SalesE to download the white paper
  • You Already Have the Data. You’re Just Not Using It.

    Distributors and manufacturers are sitting on years of untapped business intelligence. It’s in your inbox.

    Jeana Bolanos  ·  President, SalesE 


    When executives talk about AI, the conversation usually arrives at some version of: “We don’t have enough clean data.” It’s a common (and important) reason companies put off AI projects.

    There are two important classifications of data: structured and unstructured. At a high level, think of structured data as numbers and descriptions that can be organized in a spreadsheet. Unstructured data includes things like emails, faxes, PDFs, and logs. It is this unstructured data that we are overlooking.

    It’s in the email your inside sales rep sent three months ago when a customer pushed back on pricing. It’s in the customer service ticket where a buyer mentioned they were evaluating a competitor. It’s in the sequence of orders that quietly shrank from weekly to monthly before the account went dark. It’s in the chat message where a customer asked if you carried something you don’t stock (three times in one quarter).

    This is unstructured data. It doesn’t live in rows and columns and it won’t show up on your dashboard. But it contains actionable intelligence and nuance regarding your customers, pricing, and demand patterns in a way that your structured systems just can’t capture. And for distributors and manufacturers, it’s market intelligence that can make your team more effective.

    So you definitely have the data, but what are you doing with it?

    What unstructured data contains

    Structured data tells you what happened. Unstructured data tells you why and what’s about to happen next.

    Your ERP knows a customer ordered 200 units last quarter. Your email knows they asked whether you could do better on price if they committed to 500. Your ERP shows the order. Your email shows the negotiation that shaped it, the hesitation that preceded it, and the alternative they were considering.

    Most companies have years of this sitting in inboxes, customer service platforms, and communication logs: largely unsearchable, typically unanalyzed, and almost entirely disconnected from any business decision. The average distributor processes thousands of customer interactions per month. Almost none of that valuable insight makes it into strategic and daily business decisions.

    Five places the intelligence is hiding

    Pricing signals in customer conversations

    Every time a customer pushes back on a quote, asks for a discount, or mentions a competitor’s price, they are giving you pricing intelligence. Aggregated across hundreds of conversations, those signals tell you where your pricing is creating friction, which customer segments are most price-sensitive, and where you have room to hold margin without losing the deal. 

    Demand patterns in order communication

    Order emails, purchase confirmations, and reorder requests contain timing and volume patterns that your ERP captures incompletely. A customer who historically emails to ask about lead times before placing a large order is signaling intent before the order exists. A cluster of customers asking about the same product category in the same two-week window is an early demand signal. These patterns are invisible in structured data but readable in communication logs if you’re looking systematically.

    Churn signals in service interactions

    Customer churn rarely happens suddenly. It builds. A customer who used to call with questions stops calling. Complaint frequency increases before order frequency decreases. Response times to your team’s follow-ups get longer. These behavioral shifts show up in your service logs and email threads before they show up in your revenue numbers. Catching them early means you have time to act. 

    The expertise that leaves with people

    Every distributor and manufacturer has these people. The inside sales rep who has been there for twenty-two years and knows, without looking anything up, that a particular customer always orders heavy in Q3 because of their own seasonal cycle. The customer service manager who can tell from the way a complaint is worded whether it’s a one-time frustration or the beginning of a serious relationship problem. That kind of expertise is built through thousands of interactions and it lives in the communication history.

    This institutional knowledge is one of the most valuable competitive assets most companies have and it is almost entirely undocumented.

    Natural language access to your data

    Once your unstructured data is organized and queryable, your team and your customers can ask plain-language questions and get answers in seconds. A customer service rep can type “what did we promise this customer about lead times last quarter” and get an answer drawn from log or email history. A sales manager can ask “which accounts haven’t reordered in 90 days but were ordering monthly before that” and get a list. A customer can ask “can I get a price for 10 units of the blue widget that I bought last year” and get an accurate quote for the correct part.
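
    Under the hood, the reorder question resolves to an ordinary query. A minimal Java sketch (the Order record and its fields are hypothetical stand-ins for your actual schema):

    import java.time.LocalDate;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class DormantAccounts {
        // Hypothetical stand-in for your order schema.
        record Order(String account, LocalDate placed) {}

        // Accounts with no order in the last 90 days that ordered at least
        // twelve times in the year before that window.
        static List<String> find(List<Order> orders, LocalDate today) {
            Map<String, List<Order>> byAccount =
                    orders.stream().collect(Collectors.groupingBy(Order::account));
            return byAccount.entrySet().stream()
                    .filter(e -> e.getValue().stream()
                            .noneMatch(o -> o.placed().isAfter(today.minusDays(90))))
                    .filter(e -> e.getValue().stream()
                            .filter(o -> o.placed().isAfter(today.minusDays(455)))
                            .count() >= 12)
                    .map(Map.Entry::getKey)
                    .collect(Collectors.toList());
        }
    }

    The point of the natural-language layer is that nobody on the team has to write this; the plain-English question is enough.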

    Why this is more achievable than you think

    Modern, intelligent tools can read and organize unstructured data without requiring it to be perfectly structured first. Your email doesn’t need to be in a data warehouse to be queryable. Your customer service logs don’t need to be reformatted to be analyzed. The technology has matured to the point that you can start with your existing data, in its existing state, in your existing systems: no platform migration, no new tools.

    The realistic starting point for most mid-market distributors and manufacturers is a single use case. Pick the one where the pain is clear. Build something that works with your current data. Prove the value. Then expand.

    You don’t need a perfect data strategy before you start. You need a specific problem and a willingness to look at data you’ve been overlooking.

    A couple of simple ways to start

    Coordinate with IT to stop deleting email accounts one year (or whatever predetermined timeframe applies) after an employee leaves. Keep them as long as possible.

    Formalize customer service logs and increase retention time. If you aren’t systematically capturing customer service conversations, start now. If you have a retention period for these records, extend it. These logs are worth their weight in gold.

    The data is already there. It’s been accumulating in your systems for years. The only question is whether you put it to work or leave it in the inbox.

    *Before taking any action, ensure that your actions and policies align with GDPR guidance and comply with your company data privacy and security policies.

    Start with the data you already have
    Want to explore what this looks like for your business? The intelligence is already there — in your inbox, your service logs, and the heads of your longest-tenured employees. Let’s talk about how to put it to work.
    Visit salese.com to learn more
  • Whitepaper: Legacy ERP Characterization

    A Structured Framework for Documenting, De-risking, and Modernizing Around Legacy ERP Systems

    Joshua Bone

    Chief Technical Officer — SalesE

    Carl Hewitt

    Technology Architect — SalesE

  • 70% of the World’s Mission-Critical Data Still Runs on COBOL

    If your organization still runs core operations on COBOL or mainframe systems, you’re not behind, nor are you alone.

    For decades, COBOL has quietly powered the most important systems in the world. Today, an estimated 70% of mission-critical business data and transactions still run through COBOL-based systems, including banking, insurance, government services, airlines, and global supply chains.

    $3T per day in commerce processed in COBOL systems

    95% of all ATM transactions executed via COBOL

    1.5B new lines of COBOL code written every year

    1.5M banking transactions per second supported with 99.999% uptime

    These systems have survived the test of time because they are stable, secure, and exceptionally good at what they were designed to do: process massive volumes of high-value transactions with near-perfect reliability.

    Yet many executives feel pressure to modernize, and modernization is often assumed to mean a full system migration.

    If your company still runs on COBOL, your core business logic likely lives in decades-old COBOL code, institutional knowledge is embedded in systems that no one wants to touch, and rewrites are expensive, disruptive, and prone to failure (understatement of the century, I know).

    So how do you extend what already works while safely utilizing the advancements in software systems, automation, and AI?

    Modernization of these systems requires unlocking the data. This allows companies to expose COBOL logic through APIs or RPA, stream data to the cloud (when and if appropriate), and automate accessible processes (either in the cloud or on-prem). You can also introduce functional UIs and dashboards without touching core code while adding guardrails, observability, and governance.
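
    As a sketch of the API approach: a thin HTTP facade can expose one COBOL transaction without touching core code. The MainframeGateway interface below is a hypothetical stand-in for whatever adapter connects to the mainframe (often a vendor connector or message queue in practice):

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class CobolApiFacade {
        // Hypothetical adapter into the mainframe; the COBOL program is unchanged.
        interface MainframeGateway {
            String call(String transaction, String payload);
        }

        public static void main(String[] args) throws Exception {
            MainframeGateway gateway = (txn, payload) -> "stubbed-response"; // stand-in
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/quote", exchange -> {
                String body = new String(exchange.getRequestBody().readAllBytes(),
                                         StandardCharsets.UTF_8);
                byte[] reply = gateway.call("QUOT", body).getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, reply.length);
                exchange.getResponseBody().write(reply);
                exchange.close();
            });
            server.start(); // COBOL remains the system of record behind this layer
        }
    }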

    This creates a hybrid architecture where COBOL remains the system of record, the cloud holds the intelligence, and your team stays in the loop for high-impact decisions.

    All without the pain, risk and expense of a multi-year migration.

    So if your core systems run on COBOL, you are not behind, you are not alone, and you do not need to migrate to modernize.

    Build intelligence around what already works with the same stability and reliability that you have come to expect from your COBOL system.

  • Don’t Migrate Your ERP

    There are many distributors and manufacturers still using the AS/400, equivalent legacy ERPs, or home-grown systems. And if you are still using these systems, it’s because you understand the pain and risk of transitioning an ERP. You have seen the disasters; you have seen people get fired for mismanaging this incredibly complex operational evolution.

    Your patience has paid off and your moment has arrived.

    The recent advancements in the speed of software development have now made it more economical and reasonable to keep your legacy ERP system, connect your data to an external intelligent dashboard, and automate operations to achieve the efficiency that you need to help your team scale.

    The systems that historically have been a competitive advantage for distribution are now holding back the potential that lives within them. An area that is ripe for improvement is the manual handoff between systems. Every distributor has people who spend a large part of their day looking up data in an ERP or CRM to type into another system, report, or email. Critical decisions are being made on fragmented data, and communication is limited to what a person sees on the screen. Our people are our superpower. Giving them a full picture of the complete data set and the bandwidth to focus on making complex, nuanced decisions is what will drive revenue and reduce cost.

    Distributors already have the data they need to operate intelligently. It’s not a matter of getting a new or different system so you can have better data.  Activating the data you already own in a way that is uniquely meaningful to your team is the way forward, and a new tool or system migration is not the answer.

    Once your systems are connected, you can automate tasks and operations and route data to the correct people to make decisions. You can create comprehensive data-driven strategies, have configurable dashboards that give teams the information they need, and track effectiveness in real time. Building a system like this is best done in small steps, starting with repetitive actions that are low risk, then building one automation, one connection, one report at a time. The intelligence and efficiency will compound as you connect more systems and automate more tasks. Guardrails can apply to all operations, and you can also customize rules that are system-specific or task-specific to really dial in the effectiveness.
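
    To illustrate the guardrails idea (a sketch only; every rule, threshold, and name here is hypothetical), global rules and task-specific rules can be layered so an automation acts only when both agree, and routes to a person otherwise:

    import java.util.List;
    import java.util.function.Predicate;

    public class Guardrails {
        record Action(String type, double dollarValue, boolean newCustomer) {}

        // Global rules apply to every automated operation.
        static final List<Predicate<Action>> GLOBAL = List.of(
                a -> a.dollarValue() < 10_000   // large amounts always go to a person
        );

        // Task-specific rules dial in one workflow, here quoting.
        static final List<Predicate<Action>> QUOTING = List.of(
                a -> !a.newCustomer()           // new customers get human attention
        );

        static boolean canAutomate(Action a) {
            return GLOBAL.stream().allMatch(r -> r.test(a))
                && QUOTING.stream().allMatch(r -> r.test(a));
        }
    }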

    As in most areas of life, restraint and specificity will drive success. And with the most recent developments in technology and the speed of iteration, what would historically have taken months or years to implement can now be configured in days and weeks. The bottleneck used to be how fast developers could code, and this is no longer the case.

    Market changes are in full effect and many leaders are worried about getting left behind. Don’t jump into a new system or tool thinking that it will insulate you from the changes that are occurring. The pace of change is the exact argument against investing hundreds of thousands of dollars and a year of pain to transition to a new system. Use what you have. Connect your systems. Extract your data. Write your rules of operation. Automate what makes sense. Add intelligence only where needed. And you can customize exactly what that looks like for your company without the custom software price tag or time investment of the past.

  • Resource War: The Battle for Memory

    For decades, DRAM shortages followed a predictable script: overshoot, crash, recover, repeat. Demand spikes would hit, manufacturers would add capacity, and the market would flood with chips again.  This time is different.

    At first glance, the global DRAM shortage may look like a typical semiconductor cycle, but it is actually one of the first pain points that we are seeing as AI collides with physical resource limits. Consumers, enterprises, and governments are already on the rollercoaster.

    More important than the supply and demand imbalance, other factors (strategic capacity control, long-term AI demand that is just ramping up, and geopolitical constraints) are all converging at once. And it’s the first time AI is materially crowding out everyone else.

    When DRAM tightens, PCs, laptops, phones, and tablets get more expensive. Automotive electronics systems face supply risk, and I think we all remember what that looked like post-COVID. DRAM access becomes more unequal because priority (and manufacturing nodes) goes to hyperscalers, leaving everyone else to fight over the remaining capacity. This means that consumer prices on all electronics will rise, because the companies that make them are paying 2x, 3x, 4x more for the memory chips that allow them to work.

    Samsung, SK Hynix, and Micron control ~95% of the DRAM market, and they optimize profitability by focusing on high-margin node migration. A disproportionate number of manufacturing lines go to the newest technology used for AI, so the manufacturing capacity that is left is not enough to supply everyone else who needs DRAM for their products (which is basically everyone making any technology hardware product). So prices rise dramatically as companies fight over the available chips. What is different about this cycle of shortages is that it’s not just the lagging technology nodes that cannot supply demand; the companies buying up DRAM to expand their AI capabilities can’t get nearly enough to support their build plans. This was highlighted by the recent firing of a top tech executive who failed to sign multi-year contracts with memory vendors to lock in supply. These chips will decide the winners and losers in the AI race because without them, all progress stops.

    Another interesting aspect, typical of every cycle, is that bringing up a new node technology has low early yield (for every 100 chips manufactured, maybe only 75 of them work). As the technology matures, these numbers improve, but it takes months to years before manufacturing capacity reaches an optimal level. So the more the technology transitions, the lower the overall yield of the factory. The other contributing factor is the cost of a new fab: building additional manufacturing capacity takes 7-10 years and around $50B. These projects are strategically planned and advertised for years before they become reality, so there are no fast-response options to demand spikes. The final consideration is the EUV technology required to manufacture the advanced nodes, which is prohibitively expensive, in extremely high demand, and controlled by government policy and export controls that limit other potential suppliers from entering the market. Here, Micron has the key home court advantage.

    The demand curve is also changing, likely for the long term, not just a short-term bump. GPUs requiring high-bandwidth memory (HBM) and the DDR5 RDIMM density required for training clusters, combined with long qualification cycles, lock supply to only a handful of customers. HBM is particularly destabilizing because it requires leading-edge DRAM wafers, advanced packaging capacity, and in-demand manufacturing tools (like EUV).

    In conditions like these, memory suppliers stop selling memory and they start allocating supply.  This is also not new and not specific to this AI-driven spike, but what is different is the allocation hitting the largest customers so hard that they are firing executives over it. This is a new level of scarcity.

    Memory has always been the market-leading commodity indicator for the semiconductor industry. I am not totally convinced that this is changing, but I do think a significant and fundamental shift has already started, because competing for memory is the first battleground in the AI fight for resources. The next will likely be water and energy, but the global implications of the fight for memory may give us an estimate of the scale of what is to come.

  • The Future of Electronics Distribution is in the Cloud(s)

    AI has started to change distribution economics in ways the industry has never experienced. McKinsey estimates that end-to-end AI adoption can reduce inventory by as much as 30 percent, cut logistics costs by double digits, and drive major improvements in service levels and working capital efficiency. Across the broader supply chain, early adopters are seeing significant gains in speed, resilience, and operating margin. And with AI-powered logistics projected to more than triple in market size by 2032, the direction is clear: the next era of distribution will be dominated by companies that can make the leap to operate like expert software platforms while maintaining and optimizing their valuable warehouse networks.

    The path to becoming an AI-first distributor begins with the foundational move of structuring all operational data in the cloud, even if the ERP and core systems remain on-prem. Using modern API and event-driven sync technologies, distributors can stream everything (quotes, purchase orders, shipments, transactions, supplier feeds, customer signals) into a cloud database that becomes the company’s first unified, real-time model of its business. This likely will have to be done step by step so it does not disrupt ongoing operations. It will take time and patience but will set the foundation of clean, structured, accessible data from which all future AI systems will be driven.
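
    A minimal sketch of that sync pattern, with hypothetical names throughout (the CloudStore interface stands in for whatever cloud database you choose): each ERP change becomes a time-stamped event appended to the cloud data spine, leaving the on-prem system untouched.

    import java.time.Instant;

    public class ErpEventSync {
        // One time-stamped business event; the payload carries the record as JSON.
        record BusinessEvent(Instant at, String entity, String id, String payloadJson) {}

        // Placeholder for the cloud database's append/ingest API.
        interface CloudStore {
            void append(BusinessEvent event);
        }

        private final CloudStore store;

        ErpEventSync(CloudStore store) { this.store = store; }

        // Invoked by a change-data-capture hook or a polling job on the ERP side,
        // so the on-prem system of record is never disrupted.
        void onErpChange(String entity, String id, String payloadJson) {
            store.append(new BusinessEvent(Instant.now(), entity, id, payloadJson));
        }
    }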

    Once that cloud data spine exists, distributors can begin deploying specialized, AI-enabled point applications that immediately close capability gaps in their legacy systems. Pricing engines become more precise and governed. Quoting cycles shrink from hours to seconds. PO intake and document processing become automated. Predictive inventory models run independently of the ERP. Codified rules govern operational workflows with consistency. The hardest part is getting an entire organization to agree on what the operational workflow rules are. This is where your experts on the ground come in. Only the people who execute these tasks daily can write the rules in a way that sets up success. But the reality is that these systems will replace their daily functions, and for many (or dare I say most), this creates a lot of fear. Upskilling, paths to the AI-enabled future, and genuine organizational trust are just as critical to the success of this stage of the transformation as the clean cloud data. These targeted applications function as intelligence layers grafted onto the existing architecture, delivering quantifiable ROI while quietly preparing the organization for a much larger transformation.

    Over time, this incremental shift gives rise to the operating model of the AI-first distributor. In this future state, data becomes the actual system of record, not the ERP. Every operational event is captured in the cloud as a time-stamped, queryable event, and rule-bounded AI engines sit on top to optimize decision making and routing. Instead of siloed systems making isolated choices, a coordinated AI-driven orchestration layer determines what to buy, where to place inventory, how to route orders, how to price, and most importantly, when and what to escalate to people who can make nuanced, impactful decisions. People transition from repeatable, transactional tasks to managing policies, exceptions, and relationships: the work that our teams are uniquely equipped for.

    This vision requires cloud-native architecture as both a hosting choice and an operating philosophy. Microservices, event buses, elastic compute, and API-first integrations create the speed, visibility, and flexibility needed to respond to volatility with intelligence rather than brute force. In this model, the distributor becomes a software platform that also happens to move physical goods. The physical network is still the core competency, but the center of gravity shifts so everything orbits with precision around the cloud.

    Digital inventory becomes an engine for growth. When a distributor has a reliable, real-time digital representation of stock, availability, and conditions across suppliers, 3PLs, customers, and global nodes, it can orchestrate far more inventory than it physically owns. This unlocks new business models such as marketplace fulfillment, selective inventory in high-value categories, and the ability to price and promise with unprecedented precision. Growth becomes constrained not by warehouse space or working capital but by how much of the ecosystem’s inventory the platform can see, model, and influence.

    There are a couple of potential paths to becoming the first AI-first global distributor. One is the SaaS platform that steps into distribution by combining cloud-native workflows with behavioral intelligence, then taking selective inventory risk where it can generate leverage. While technologically feasible, this route will struggle due to a lack of deep domain expertise. Distribution is a relationship-driven, reputation-sensitive, extremely complex ecosystem. Outsiders often underestimate how genuinely difficult it is to scale inside it.

    The second path is an incumbent distributor that successfully shifts to cloud-based, data-first operation. These companies have unmatched supplier relationships, operational expertise, and physical networks. And many are already experimenting with AI in logistics, warehousing, sales, and operations. Their challenge is not vision; rather, it is the massive amount of technical, process, and organizational debt. They must standardize data models across business units, move intelligence out of the ERP, and redesign workflows so AI-enabled decision engines operate within clear, compliant guardrails. The success stories here will be those who take these steps early, deliberately, and with long-term discipline, and who can implement these changes without disrupting the flow of business.

    The most likely winner is a hybrid: a company that blends the architectural ambition of a software firm with the operational mastery of a distributor. It will start in a narrow vertical where data advantages matter most, build a cloud-native data and events platform as the backbone, layer AI-driven decision engines on top, and maintain a thin but strategically important physical layer supported by a world-class exceptions and relationship team. Once the digital foundation is strong, it can plug into new physical networks globally and expand with less capital than those who have not made the leap. I envision a possibility where an established distributor spins off its cloud-based operating system as an independent venture that operates like a software company, leverages the physical inventory of said distributor, and creates a broad network of partners to maximize access to physical inventory with minimal operational overhead.

    Begin building your cloud data foundation now, even if your ERP remains on-prem for years. This single step unlocks all subsequent innovation and allows your organization to begin deploying the AI-enabled applications that will define the next decade of distribution. Then decide strategically how you want to participate in the evolving landscape: either double down on physical assets while partnering with cloud-native software platforms, or invest directly in the digital orchestration layer yourself. Either path is viable; what matters most is conviction.

    The first true AI-first global distributor will treat data, intelligence, and decision engines as the core business, orchestrating a vast network of digital inventory while remaining capital-light in physical assets. They will expand through integrations, partnerships, and software and they will redefine how distribution flows.

    I’m excited to see who builds it.

  • Procurement in an AI World: A Field Guide for Distribution & Manufacturing Leaders

    Procurement is a value engine under extreme pressure. McKinsey finds that managed spend per FTE is up ~50% vs. five years ago, while AI (including “agentic” systems) can lift procurement efficiency 25–40% if embedded correctly. Mature operating models correlate with ~5 percentage-point EBITDA impact. Yet core systems like P2P (Procure-to-Pay), SRM, and e-sourcing are still underused, leaving real money on the table.

    Below are the high-leverage shifts for distributors and manufacturers, especially where BOMs, MRO, long-tail spend, engineering collaboration, and compliance make or break margins.

    1) Redesign how work flows before you add “AI agents”

    What most miss: leaders jump to tools; winners rewire work first.

    • Split strategic vs. transactional work (explicit role design, not just a new org chart). Two-thirds of leaders already segregate these tracks and see cost, on-time delivery, and supplier performance gains.
    • Stand up a Procurement COE that owns method and math, not only tools. Leading COEs codify cost-engineering (e.g., should-cost), analytics standards, and AI/e-sourcing methods; one chemicals firm cut 13% in raw-materials spend by industrializing should-cost.
    • Engineer the “human + rules + AI” interface. Decide where humans approve, where deterministic rules hard-gate, and where AI proposes. Treat this like safety-critical design, not a chatbot bolt-on. McKinsey’s evidence links operating-model maturity—not model novelty—to profitability.

    Why it matters for you: Without these seams defined, agentic AI becomes theater. With them, you convert busywork into governed automation and free scarce talent for category strategy.

    2) Monetize the unsexy: P2P, e-sourcing, and tail-spend hygiene

    What most miss: the ROI is hiding in under-adopted basics.

    • P2P is still under-deployed (only ~60% of large orgs and ~30% of small have it), despite 2–5% cost-reduction potential. If your P2P UX is clunky, adoption dies. Start there.
    • E-sourcing is used by only ~1/3 of orgs, yet a manufacturer achieved 20% savings in the notoriously messy MRO category by making it standard. Tail-spend automation in distribution is often the fastest dollar.
    • Invoice-to-contract reconciliation with AI can expose silent value leakage; one global pharma found >$10M in weeks and renegotiated. If you ship parts, manage spares, or run DCs, this is low-risk/high-yield (a minimal sketch follows this list).
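
    A sketch of the reconciliation core, with illustrative field names (in practice an AI layer first extracts these values from the invoice and contract documents):

    import java.util.List;
    import java.util.Map;

    public class InvoiceReconciler {
        record InvoiceLine(String sku, int qty, double unitPrice) {}

        // Sums overbilling versus contracted price: the "silent leakage"
        // that never surfaces unless someone (or something) compares documents.
        static double leakage(List<InvoiceLine> invoice, Map<String, Double> contractPrice) {
            double total = 0;
            for (InvoiceLine line : invoice) {
                Double agreed = contractPrice.get(line.sku());
                if (agreed != null && line.unitPrice() > agreed) {
                    total += (line.unitPrice() - agreed) * line.qty();
                }
            }
            return total;
        }
    }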

    Action you won’t regret: make P2P + e-sourcing + AI reconciler a single 90-day program with adoption targets, not three tools.

    3) Build “data products” that your agents can actually use

    What most miss: poor signal = “AI agents” that meander.

    Create small, owned “data products” that are stable interfaces for people and machines:

    • Supplier Master (parent/child roll-ups, risk flags, payment terms, ESG attestations).
    • Category & spec taxonomy (parts, alternates, cross-references, criticality codes).
    • Contract clause library (fallbacks and redlines tied to risk/criticality).

    McKinsey’s evidence: analytics and genAI pilots are widespread (~40% have piloted), but value shows up where the inputs are governed and reusable across use cases.

    Distribution-specific win: thread your line-card → alternates → qualified suppliers into one product so agents can propose substitutions that ops and quality trust on day one.
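
    As a sketch of what that single threaded data product might look like as stable, typed records (all fields illustrative, not a schema recommendation):

    import java.util.List;

    public class LineCardDataProduct {
        record QualifiedSupplier(String supplierId, String riskFlag, double otifRate) {}

        record Alternate(String partNumber, String crossReference,
                         List<QualifiedSupplier> qualifiedSuppliers) {}

        // One line-card item threads category, criticality, alternates, and
        // qualified suppliers, so an agent's substitution proposal carries the
        // context that ops and quality need to trust it on day one.
        record LineCardItem(String partNumber, String category, String criticalityCode,
                            List<Alternate> alternates) {}
    }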

    4) Target non-obvious use cases that move industrial P&L

    Skip the “AI writes my RFP” demos. Prioritize:

    • BOM should-cost + design-to-value with engineering. McKinsey cites 11% cost reduction when sourcing partners directly with engineering. Make it a joint ritual.
    • Policy-driven dynamic buying channels (catalogs with rule-based rails). This is how you harvest the 25–40% efficiency potential from agentic AI without losing control.
    • Supplier-performance “closed loop.” Convert PO exceptions, late deliveries, NCRs, and expedite fees into training data. Then let agents pre-empt risk by recommending alternates with proven OTIF. (McKinsey shows leaders emphasizing partnership and flexibility—not just price.)
    • Contract variance heat-maps. Use genAI to surface clauses driving leakage (payment terms, warranty carve-outs, surcharges tied to commodity indices) and push standardization. (See McKinsey’s genAI-in-procurement guidance.)

    5) Prepare for the hype cycle without becoming a statistic

    Gartner says GenAI for procurement has already hit hype peaks and later the “trough,” with fragmented data, complex integration, and unclear value stalling many programs. They also warn of “agent-washing” and project scrap rates for immature agentic initiatives. Translation: scrutinize claims, bound autonomy, and demand outcome contracts.

    Guardrails to adopt now

    • Bounded autonomy: start with constrained agents (catalog intake, 3-bid-and-buy, variance checks) before negotiation or commitment authority.
    • Red-team your agents: test for supplier hallucinations, clause drift, and bias.
    • Outcome-based vendor SLAs: pay for realized savings/cycle-time reductions, not pilots.
    • Human-in-loop by design: Deloitte’s 2025 CPO survey ties performance to tech and talent, not tech alone.

    6) Build the operating system, not a tool zoo

    From McKinsey’s cross-sector survey of 300+ procurement leaders: the organizations making procurement a strategic driver are the ones that reorganize around it (reporting lines to CEO/CFO/COO, COEs with accountability at CPO level, and center-led category strategy). Then technology amplifies the new design.

    A practical 180-day roadmap (built for distributors & manufacturers)

    1. 30 days – Baseline & backlog: P2P adoption audit; tail-spend map; clause library seed; Supplier Master gaps. (Name 3 categories for impact: MRO, packaging, indirect logistics.)
    2. 60–90 days – Industrialize the basics: Fix P2P UX; switch 3 categories to e-sourcing; deploy invoice-to-contract reconcilers; publish category taxonomies; establish bounded agents for intake and 3-bid-and-buy.
    3. 90–180 days – Scale value work: Launch should-cost playbooks with engineering on 2 BOM families; codify alternates and cross-refs; turn exception codes into supplier-risk signals; tie weekly value dashboards to CFO-visible KPIs (savings, cycle time, OTIF, leakage recovered).

    For distribution and manufacturing, architecture beats algorithms. The durable winners don’t chase every new model; they design procurement as a governed system where humans, rules, and AI cooperate inside the flow of work. That’s how you capture the 25–40% efficiency step-change and convert it into EBITDA.



  • The GenAI Divide: How to Turn Pilot Hype into Real Business Impact

    In July 2025, MIT’s NANDA initiative released The GenAI Divide: State of AI in Business 2025, and its findings should stop every business leader in their tracks.
    Despite $30–40 billion invested globally in GenAI, the study found that ≈ 95% of enterprise pilots deliver no measurable ROI, and only about 5% reach scalable, integrated success.

    Enterprises are experimenting faster than they’re operationalizing.

    What the Data Reveal

    1. High adoption, low transformation
    Over 80% of companies have piloted AI tools, but only a fraction moved beyond proof-of-concept. Success comes not from “trying AI,” but from embedding it into core business systems—ERP, CRM, MES, or compliance workflows.

    2. The real barrier is integration, not technology
    MIT’s research calls this the “learning gap”: most GenAI systems don’t adapt, retain feedback, or plug into decision loops.
    Without domain-specific learning, AI remains surface-level, producing flashy outputs, not measurable gains.

    3. External partnerships double the odds of success
    One of the study’s most practical findings: organizations that partner with specialized vendors see 2× higher success rates than those building internally.
    Why? Vendors bring cross-industry experience, tested frameworks, and governance infrastructure that’s hard to replicate in-house.

    For industry leaders, the MIT study reinforces a truth we’ve long understood in engineering and manufacturing: architecture determines performance. The organizations seeing real ROI are building the systems that allow intelligence to flow safely, consistently, and transparently.

    Ask not “What model should we use?” but “What structure makes the model trustworthy?”
    The winners will be the ones who design AI like infrastructure that is reliable, auditable, and aligned with the business it serves.

    Here’s what you can do now:

    1.  Start small, but start with purpose

    Define 2–3 workflows where AI can remove friction or cost—pricing variance, audit trails, policy modeling, or data reconciliation. Measure before and after.

    2. Embed, don’t bolt on

    AI must live inside your workflow. If it can’t interact with your ERP, approval chains, or data lake, it’s a demo, not a solution.

    3. Design for governance and auditability

    The MIT study shows that explainability and traceability predict ROI.
    In regulated industries, trust is not a feature—it’s a requirement.

    4. Choose partners, not providers

    External partnerships outperform internal builds when vendors:

    • Understand your industry and compliance needs
    • Integrate deeply into your operational stack
    • Commit to measurable business outcomes
    • Provide auditable, policy-aware AI guardrails

    The GenAI Divide provides a roadmap for enterprises.  MIT’s research proves that AI success isn’t about model size or spend; it’s about architecture, governance, and human alignment.  The future belongs to organizations that can integrate AI into their everyday decisions with transparency, discipline, and trust.

    Every AI pilot teaches something, but not every experiment should become a product.
    The lesson from MIT’s 2025 report is clear:
    Build systems that learn responsibly, operate transparently, and deliver real business value.

    Because the 95% isn’t your destiny; it’s the beginning of a larger story of AI success.

    Jeana Bolanos is the Founder & CEO of SalesE, a Virginia-based SaaS company combining deterministic decision architectures with AI to automate and govern complex sales and operational workflows for enterprise distributors and manufacturers.