In the rapidly evolving landscape of applied AI, the difference between a promising demo and an enterprise-grade platform often comes down to one word: scale. As Fortune 1000 companies deploy AI agents to handle millions of customer interactions, they're discovering that not all platforms are built the same. While many conversational AI vendors focus on model selection and prompt engineering, we've learned that the foundation matters just as much as, and perhaps more than, what you build on top of it.
Our prior automation technology was deterministic and built on Python. It served us well, but looking to the future we wanted to combine Generative AI agents with our deterministic automation. That shift raised technical challenges around scalability, concurrency, and asynchronous communication.
That's why I made the strategic decision to rebuild our Python Automation Framework on Elixir: not only to improve our automation, but also to expand our Generative AI capabilities.
The Concurrency Challenge
Customer service doesn't happen in neat, sequential order. At any given moment, an enterprise contact center might be managing thousands of simultaneous conversations across web chat, SMS, voice, social media, and other channels. Each interaction requires:
- Real-time context management
- Integration with backend systems
- Dynamic workflow orchestration
- Continuous state monitoring
- Fault-tolerant error handling
Traditional architectures struggle with this reality. We experienced it firsthand with our Python-based system. Despite aggressive optimization, we hit fundamental concurrency limitations that threatened our ability to scale with our customers' growing demands.
Why Elixir Changes the Game
Elixir, built on the Erlang VM (BEAM), was designed from the ground up to solve exactly these challenges. Originally created to power telecom switches handling millions of concurrent connections, it brings several critical advantages to conversational AI agents:
Massive Concurrency: Elixir processes are lightweight. You can run millions simultaneously on a single machine. This isn't theoretical. Our Agent Framework now handles concurrent conversations with the same efficiency whether we're managing 100 or 100,000 interactions.
Built-in Fault Tolerance: In customer service, one conversation's failure must never spill into another. Elixir's supervision trees mean that if a single agent conversation encounters an error, it is isolated and recovered without impacting other active sessions. Customers never experience the cascading failures that plague monolithic architectures.
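The restart-on-failure idea behind supervision trees can be pictured with a rough Python analogy (illustrative only; the platform itself relies on Elixir's OTP supervisors, and all names here are invented):

```python
import asyncio

async def supervise(name, coro_fn, max_restarts=3):
    """One-for-one style: restart a failed session without touching its siblings."""
    for _attempt in range(max_restarts + 1):
        try:
            return await coro_fn()
        except Exception as exc:
            print(f"{name} crashed ({exc}); restarting")
    return "gave up"

attempts = {"count": 0}

async def flaky_session():
    # Fails twice with a transient error, then succeeds.
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("transient backend error")
    return "resolved"

async def stable_session():
    await asyncio.sleep(0.01)
    return "resolved"

async def main():
    # The crashing conversation is isolated; the other sessions finish normally.
    return await asyncio.gather(
        supervise("session-1", flaky_session),
        supervise("session-2", stable_session),
        supervise("session-3", stable_session),
    )

results = asyncio.run(main())
print(results)
```

In Elixir this isolation comes for free from the BEAM's process model; the sketch only conveys the shape of the guarantee.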
Hot Code Swapping: We can deploy updates to our agent logic without downtime. For enterprises operating 24/7 global contact centers, this capability is transformative.
The Best of Both Worlds: Deterministic Automation Meets Generative AI
Here's where Pypestream’s solutions are truly differentiated: we don't force customers into an all-or-nothing bet on generative AI.
While the industry races to make everything AI-powered, we've taken a more pragmatic approach. Our Agent Framework supports both deterministic automation (through visual node-graph workflows) and generative AI models. Critically, it allows the two to interoperate seamlessly within the same conversation.
Deterministic Automation: For use cases requiring predictable, compliant and auditable outcomes (e.g. password resets, account lookups, order tracking, regulatory workflows), Pypestream’s node graph model delivers 100% deterministic execution. No hallucinations, no probabilistic outputs, just reliable automation that performs exactly as designed every time.
Generative AI Flexibility: When conversations require natural language understanding, complex reasoning or handling unpredictable customer intents, our framework can invoke LLMs at precisely the right moment.
Hybrid Intelligence: The real power comes from combining both. A single conversation might use deterministic workflows to securely authenticate a customer, then switch to generative AI to understand a complex complaint, then return to deterministic logic to process a refund according to business rules. The customer experiences one seamless interaction, while you maintain control over which parts require the precision of automation versus the flexibility of AI.
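In spirit, that hand-off is a router that sends each step of a conversation to either a deterministic node or the LLM. The sketch below is a toy illustration, not Pypestream's actual API; every name in it is hypothetical:

```python
# Deterministic nodes for auditable steps; an LLM only where open-ended
# understanding is needed. Each handler reads and extends a shared context.

DETERMINISTIC_NODES = {
    "authenticate": lambda ctx: {**ctx, "authenticated": True},
    "process_refund": lambda ctx: {**ctx, "refund": "issued per policy"},
}

def call_llm(ctx):
    # Stand-in for an LLM call that interprets a free-form complaint.
    return {**ctx, "intent": "refund_request"}

def run_conversation(steps, ctx):
    for step in steps:
        handler = DETERMINISTIC_NODES.get(step, call_llm)
        ctx = handler(ctx)  # every step sees the same conversation context
    return ctx

final = run_conversation(
    ["authenticate", "understand_complaint", "process_refund"],
    {"customer": "C-1042"},
)
print(final)
```

The point of the pattern: the deterministic steps are exactly as predictable as ordinary code, and the probabilistic step is confined to the one place it adds value.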
This architectural flexibility means our customers aren't locked into a single approach. If your use case doesn't need generative AI, you don't pay for it or assume its risks. If you want to start with pure automation and gradually introduce AI capabilities, the platform supports that evolution. The choice is yours, driven by your needs, your risk tolerance, and your business requirements.
Integration Without Limits: Action Node Functions and Enterprise Connectivity
Enterprise customer service doesn't exist in a vacuum. It requires deep integration with your existing technology ecosystem. That's where our Action Node function technology becomes critical.
Action Nodes (serverless functions) enable our agents to connect to virtually any system through flexible function calls and API integrations. Whether you're running cutting-edge microservices on Kubernetes or maintaining mission-critical applications on mainframes from the 1980s, our platform bridges the gap.
API Integration: Through Action Nodes, our AI agents can make real-time function calls to any REST API, SOAP service or custom endpoint. Need to check inventory in SAP? Update a record in Salesforce? Validate a policy in a proprietary claims system? Action Nodes handle it natively within the conversation flow.
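A minimal sketch of an Action-Node-style function (the endpoint, field names, and injectable transport are all invented for illustration):

```python
import json
import urllib.request

def check_inventory(sku, fetch=None):
    """One REST call, with a structured result returned into the
    conversation flow. The ERP endpoint here is hypothetical."""
    url = f"https://erp.example.com/api/inventory/{sku}"
    if fetch is None:  # default: a real HTTP GET
        def fetch(u):
            with urllib.request.urlopen(u, timeout=5) as resp:
                return resp.read().decode()
    return {"sku": sku, **json.loads(fetch(url))}

# Offline demo with a stubbed transport standing in for the ERP system:
stock = check_inventory("SKU-42", fetch=lambda u: '{"on_hand": 17}')
print(stock)
```

The injectable `fetch` is just a testing convenience; the essential idea is that the function returns structured data the agent can act on immediately.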
RPA Integration: For systems that don't expose APIs (e.g. legacy green-screen applications, desktop software or complex user interfaces) our Action Node technology integrates seamlessly with RPA (Robotic Process Automation) solutions. This means even your oldest, most entrenched legacy systems can participate in modern conversational AI workflows.
The combination creates truly limitless integration possibilities. A customer conversation can trigger an RPA bot to navigate a legacy system and extract data, return that data to the AI agent, which then calls a modern API to update your data warehouse, all while maintaining context and responding to the customer in natural language.
For enterprises sitting on decades of technical debt, this is transformative. You don't need to modernize your entire infrastructure before deploying AI agents. Our platform meets your systems where they are, whether that's cutting-edge cloud-native architecture or COBOL running on mainframes.
MCP Tools: Giving LLMs Direct Access to Your Enterprise
But here's where the architecture gets even more powerful: Through Model Context Protocol (MCP) integration with our Action Nodes, we've fundamentally changed how AI agents interact with enterprise systems.
Traditional approaches treat system integration as a two-step process:
1) The LLM decides what to do and describes it in a prompt.
2) An external orchestration layer executes the API call and feeds the results back for another round of prompting.
This is slow, error-prone, and breaks the agent's reasoning flow.
Our Action Nodes expose enterprise integrations directly as tools that the LLM can invoke natively during its reasoning process. A customer-specific function to check inventory in your legacy ERP system, validate a claim against your proprietary business-rules engine, or retrieve account details from a mainframe: these become first-class tools the AI agent can use just as it uses language understanding.
The LLM doesn't describe an API call and wait for a human or orchestration layer to interpret and execute it. Instead, it directly invokes "check_inventory" or "validate_claim" as part of its decision-making process, receives structured responses immediately, and continues reasoning with that data in context. This is the difference between an AI that talks about actions and an AI that takes actions.
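Conceptually, the tool-native loop looks like this. The sketch is a toy, not MCP's actual wire protocol: the "model" is scripted, and the tool names and payloads are invented:

```python
# The model emits a tool call, the runtime executes it natively, and the
# structured result re-enters the reasoning loop as context.

TOOLS = {
    "check_inventory": lambda args: {"sku": args["sku"], "on_hand": 17},
    "validate_claim": lambda args: {"valid": args["amount"] <= 500},
}

def fake_model(messages):
    # A real LLM would decide this; here we script one tool call, then an answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "check_inventory", "args": {"sku": "SKU-42"}}
    return {"answer": "17 units in stock"}

def agent_loop(user_msg):
    messages = [{"role": "user", "content": user_msg}]
    while True:
        action = fake_model(messages)
        if "answer" in action:
            return action["answer"]
        result = TOOLS[action["tool"]](action["args"])  # native invocation
        messages.append({"role": "tool", "content": result})

reply = agent_loop("Is SKU-42 in stock?")
print(reply)
```

Notice there is no second orchestration layer: the runtime that hosts the model also executes the tool and feeds the result straight back.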
For enterprises with decades of proprietary systems and business logic locked behind APIs, this transforms inaccessible legacy infrastructure into a competitive advantage. An enterprise’s institutional knowledge becomes the agent's knowledge, accessible in milliseconds, not meeting cycles.
The Agent Framework: Designed for Agentic AI
Our Agent Automation Framework leverages Elixir's strengths to enable sophisticated agentic behaviors that go far beyond simple chatbots:
Asynchronous Workflows: Real customer problems don't resolve in a single turn. Our agents can initiate long-running asynchronous workflows (e.g. checking inventory, processing returns, coordinating with human agents) while maintaining dozens of other conversations. Each workflow runs independently, with automatic state management and recovery.
Stateful Context Management: Every conversation maintains rich context across channels and sessions. When a customer moves from web chat to phone, or returns days later, the AI agent has full continuity. This state management, powered by Elixir's process model, enables truly personalized service at scale.
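The shape of a long-running, stateful workflow can be sketched with Python's asyncio (a loose analogy; on our platform each session is an Elixir process, and every name below is hypothetical):

```python
import asyncio

# A return is processed in the background while the session keeps its
# context, even as the customer switches channels mid-flight.

SESSIONS = {}  # session_id -> conversation context

async def process_return(session_id, order_id):
    SESSIONS[session_id]["workflow"] = "return:pending"
    await asyncio.sleep(0.01)  # stand-in for a slow backend call
    SESSIONS[session_id]["workflow"] = f"return:{order_id}:complete"

async def main():
    SESSIONS["s1"] = {"customer": "C-1042", "channel": "web"}
    task = asyncio.create_task(process_return("s1", "O-77"))
    SESSIONS["s1"]["channel"] = "voice"  # customer moves from web to phone
    await task
    return SESSIONS["s1"]

state = asyncio.run(main())
print(state)
```

The workflow and the channel switch touch the same session state without blocking each other, which is the continuity the paragraph above describes.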
Generative AI at Enterprise Scale
Here's where it gets interesting for tech-heads like me: combining Elixir's concurrency model with modern LLMs creates capabilities that neither technology could achieve alone.
Our Agent Framework treats generative AI as one component in a larger orchestration. While an LLM generates a response, the same agent process can simultaneously:
- Query multiple backend systems
- Validate business rules
- Check compliance requirements
- Update CRM records
- Prepare follow-up workflows
This parallel processing, natural to Elixir's architecture, means our AI agents don't just chat faster, they resolve issues faster. What might take five minutes and multiple handoffs in a traditional system happens in under a minute with our approach.
Battle-Tested at Fortune 1000 Scale
This isn't vaporware. Our Elixir-based platform currently serves some of the largest enterprises in the world, processing tens of millions of customer interactions monthly. We've proven that this architecture can:
- Maintain sub-second response times under peak load
- Scale elastically with demand (up or down)
- Achieve 99.99% uptime SLAs
- Integrate with complex enterprise systems (mainframes to microservices)
- Meet stringent security and compliance requirements
The Competitive Moat
As the conversational AI market gets crowded, we're seeing competitors focus on two strategies: either chasing the latest model capabilities or racing to the bottom on price. Both miss the point.
Enterprise customers don't just need smart AI, they need AI that works reliably at their scale, integrates with their existing systems, and delivers measurable business outcomes. That requires architectural decisions made years before competitors recognize the need.
Our Elixir foundation is that kind of decision. While others are now discovering their scaling challenges, we're focused on building increasingly sophisticated agent capabilities on a platform designed for the demands we knew were coming.
What This Means for Our Customers
For the enterprises Pypestream serves, our Elixir strategy translates into tangible benefits:
- Faster Resolution: Asynchronous workflows and parallel processing mean customer issues resolve in minutes, not hours
- Higher Capacity: Handle 10x the conversation volume without 10x the infrastructure
- Better Reliability: Fault isolation and automatic recovery keep your customer experience consistent
- Future-Proof: As our clients' needs grow, our platform scales alongside them, with no painful migrations required
Looking Ahead
The future of customer service is applied AI systems that can reason, plan, and act autonomously to resolve customer needs. But agency without reliability is just chaos. By building on Elixir, we're ensuring that as our agents get smarter, they also get more dependable. That's the foundation enterprises deserve. We're not just building conversational AI agents at Pypestream.
My team is building the platform that will power customer service for the next decade.