Kamalakar Devaki: Advancing Practical AI For Enterprise Workflows

Kamalakar Devaki

Building systems for real-time analytics, sovereign AI, edge intelligence, and secure model training!
Every now and then, a question interrupts the rush of daily life. What if innovation begins long before code or circuits? What if it begins in a quiet moment at home, when a child bends over a simple science kit and a parent wonders how far imagination can take them?

That reflection mirrors the world of Kamalakar Devaki, where personal meaning guides ambition. His son Karthi studies technology with dreams of surpassing Bill Gates and Elon Musk. Kamal smiles and says, “His ambition pushes me forward every single day.”

This sense of purpose fuels his journey.

Grounding Years

Kamalakar began his academic journey with a Bachelor of Commerce, a path many expected him to continue along predictable professional lines. Instead, he made a decisive shift toward computing. Through focused computer courses, he developed proficiency in programming, system architecture, and analytical reasoning.

These early choices shaped his inclination toward applied problem-solving, building systems that translate logic into real-world outcomes.

His professional career began at Tech Mahindra and Mindtree, where he worked as a software architect on large-scale enterprise systems. These roles exposed him to high-availability architectures, security-sensitive deployments, and the realities of building software for mission-critical environments.
Over time, he developed a reputation for architectural clarity and execution discipline. As he once remarked to a colleague, “I will find a way to solve real problems with intelligent systems.”

Exposure to Early Enterprise AI Adoption

During his corporate tenure, Kamalakar closely observed the early adoption of artificial intelligence within enterprise workflows. AI promised automation, insight, and efficiency, but practical adoption often fell short.

Most solutions demanded heavy infrastructure, specialized teams, and cloud dependence. Medium-sized enterprises struggled with cost, latency, and data security concerns. Kamalakar noticed a gap between AI’s promise and its operational reality.

He realized that organizations wanted practical AI: systems that could run efficiently, respect data boundaries, and integrate seamlessly into existing workflows.

Entrepreneurship soon became a natural evolution.

Founding SandLogic: A Practical AI Vision

Kamalakar founded SandLogic in Bengaluru with a clear purpose: to build AI systems grounded in real-world constraints. The company focuses on Generative AI, Edge AI, and AI co-processor chip development, guided by a single philosophy—Adding Intelligence where it truly matters.

From the outset, SandLogic concentrated on problems where intelligence needed to operate in real time, at scale, and within strict enterprise boundaries. Rather than building broad, generic platforms, the emphasis was on speech, language, and decision intelligence that could integrate directly into operational workflows.

Early enterprise adoption validated this approach. SandLogic’s solutions demonstrated measurable impact by improving accuracy, responsiveness, and efficiency in environments where latency, reliability, and data control were critical.

Recognition followed, including the National Startup Award and the Aegis Graham Bell Award, affirming SandLogic’s focus on applied innovation and practical AI deployment.

Advancing Practical AI Beyond Software

As SandLogic matured, Kamalakar’s thinking moved beyond software alone. He began asking a deeper question.

If enterprises depend entirely on imported compute, opaque runtimes, and externally controlled models, can AI ever be truly secure, efficient, or sovereign?

This question shaped SandLogic’s next phase.

He observed that many AI deployments remained constrained by cloud-first assumptions. Latency, power consumption, and data movement created friction—particularly in regulated, sensitive, or resource-constrained environments.

True progress, he concluded, required control across the full AI stack.

Reimagining AI Compute at the Edge: ExSLerate

At the foundation of this expanded vision lies ExSLerate, SandLogic’s AI co-processor initiative.
ExSLerate is designed for environments where real-time intelligence, power efficiency, and data sovereignty are essential. Rather than relying on data-center-scale accelerators, the chip focuses on energy-efficient AI execution closer to where data is generated.

Key design principles include:

  • Support for language, vision, and speech workloads
  • Efficient execution of small and tiny language models (SLMs and TLMs)
  • Ultra-low power consumption
  • On-device and on-prem inference and training

ExSLerate enables deployment across defense systems, healthcare infrastructure, industrial automation, and secure enterprise networks, where cloud dependence is often impractical.

The initiative aligns with India’s Chips-to-Startup (C2S) mission, reinforcing SandLogic’s commitment to indigenous semiconductor capability.

EdgeMatrix: Executing Intelligence Where It Matters

As SandLogic’s capabilities expanded from model development to real-world deployment, a critical challenge surfaced: how to run AI reliably, predictably, and cost-effectively across heterogeneous environments.

In enterprise settings, AI workloads span CPUs, GPUs, edge devices, and custom accelerators. Models often behave differently across these environments, leading to inconsistent latency, unpredictable costs, and fragile deployments. Inference performance, token generation speed, and infrastructure efficiency become bottlenecks as usage scales.

To address this, SandLogic developed EdgeMatrix, an AI execution and acceleration framework engineered specifically to optimize inference economics and operational reliability.

EdgeMatrix is designed to deliver measurable, production-grade enterprise gains, validated through internal benchmarks and customer deployments:

  • Up to 30–40% reduction in token generation cost through optimized execution graphs and memory reuse
  • Up to 30% lower power consumption by shifting inference workloads from GPU-heavy stacks to optimized CPU, edge, and accelerator paths
  • Up to 2× higher inference throughput, enabling significantly more tokens generated per second on the same infrastructure
  • 30–40% reduction in overall infrastructure cost, driven by lower GPU dependency and efficient resource utilization
  • Deterministic inference behavior, ensuring predictable latency, stable outputs, and repeatable performance across environments
  • Hardware-aware acceleration without requiring model rewrites or architectural changes

EdgeMatrix orchestrates AI workloads efficiently across:

  • CPUs
  • GPUs
  • Edge devices
  • AI co-processors such as ExSLerate

By abstracting runtime complexity while remaining hardware‑conscious, EdgeMatrix allows enterprises to deploy AI systems that scale economically. Models can be executed closer to where data is generated, reducing network overhead while maintaining performance guarantees.

For enterprises, this translates into deployable, reliable AI: systems that finance teams can forecast, architects can trust, and operators can run continuously without surprise cost escalations.

EdgeMatrix reinforces SandLogic’s belief that AI innovation must be measured not just by model capability, but by how efficiently and reliably intelligence operates in production.

Sovereign Intelligence Models

At the intelligence layer, SandLogic began building sovereign AI models designed to operate entirely within controlled environments.

Shakti Language Models

The Shakti LLM series prioritizes reasoning efficiency, domain adaptability, and low-latency inference over sheer scale. These models support enterprise use cases such as document intelligence, analytics copilots, customer insights, and internal knowledge systems.

Shakti models can be trained, deployed, and operated fully on-prem or at the edge, minimizing hallucination risk while maximizing control.

Svara: Speech-to-Text Intelligence

Svara is SandLogic’s Automatic Speech Recognition system, engineered for real-world, multilingual environments. It maintains accuracy across noisy conditions, accents, and emotional speech patterns.

Svara underpins applications in call analytics, customer experience, and behavioral health, delivering real-time insights while preserving data privacy.

Sruthi: Text-to-Speech Systems

Complementing Svara is Sruthi, SandLogic’s Text-to-Speech system. Sruthi supports natural prosody, emotion-aware synthesis, SSML-based control, and on-device deployment.

It enables accessibility tools, enterprise voice agents, and multilingual interfaces without compromising security.

Behavioral Health and Social Impact

SandLogic expanded its speech analytics into behavioral health through collaborations in Australia and the United States. Real-time speech analysis helps crisis call center executives interpret emotional cues and respond with empathy and accuracy.

These deployments demonstrate AI’s potential to support critical human-centered services where timing and sensitivity matter.

Generative AI in Creative Industries

Using LingoForge, SandLogic applied generative AI to creative domains. For one of India’s largest watch manufacturers, the platform generated design collections aligned with brand identity and user preferences.

This success opened exploration into additional sectors including industrial design, automotive, architecture, and fashion, where generative AI acts as a creative collaborator.

Leadership Philosophy

Kamalakar’s leadership philosophy reflects SandLogic’s engineering DNA—systems thinking, accountability, and long-term execution. His approach is less about hierarchy and more about building teams that think end-to-end, from problem definition to production deployment.

At SandLogic, leadership is grounded in a few core principles:

  • Problem-first thinking, where real-world constraints shape every technical decision
  • Ownership-driven teams, encouraged to think across chip, runtime, and model boundaries
  • Disciplined experimentation, balancing innovation with production reliability
  • Engineering excellence through iteration, not shortcuts

Kamalakar believes that empowerment without accountability leads to noise, and innovation without rigor leads to fragility. Teams are trusted with autonomy, but also with responsibility for outcomes.

In his words:

“Good engineering starts with understanding the problem deeply, its constraints, its risks, and its impact in the real world.”

“I believe in giving teams ownership, not just tasks. When people own systems end-to-end, reliability and innovation follow naturally.”

Toward a Responsible AI Future

Together, ExSLerate, EdgeMatrix, and SandLogic’s sovereign models form a unified AI stack, from silicon to execution to intelligence.

This integration reflects a long-term vision: AI systems that are efficient, secure, and adaptable, serving people where they are, rather than forcing dependence on distant infrastructure.

Kamalakar summarizes this philosophy simply:

“Innovation must serve people. Otherwise, it loses meaning.”

From early curiosity to building chips, models, and platforms, his journey continues, guided by patience, responsibility, and a conviction that well-grounded intelligence can transform industries.
