Rise of AI Agents
Exploring the Evolution of AI Agents: Deployment as an Operating System
Prepared by Eshwar Potnuru | AIM Inc.

This diagram illustrates the evolution of AI agents in document processing and highlights the following five stages:
- Basic LLM (Large Language Model) Processing
- LLM Processing Combined with RAG (Retrieval-Augmented Generation) and Tools
- Multi-Modal LLM Workflows
- Advanced AI Architectures with Memory Capabilities
- Future Agent Orchestration (Coordinated Control of Multiple Agents)
Based on Hypatos’ insights on “intelligent document processing,” we can see that the ability to handle complex, unstructured data like invoices has improved dramatically.
Moreover, multi-modal LLMs (shown in the diagram) integrate text, images, and other data types, enabling applications such as automated back-office tasks and real-time business insights. Research on Medium indicates that models like NExT-GPT can generate outputs spanning text, images, and video, underscoring the critical importance of this technology for business use cases.
Table of Contents
Section 1: Hypotheses and Speculative Answers
- Definition of AI OS Architecture
- Core System: AI OS Kernel
- Modular AI Subsystems
- Distributed and Decentralized Processing
- Human–AI Interaction Layer
Section 2: Research Findings
- Architecture in a World Where AI OS Exists
- Technical Gaps Between the Current State and the Visionary AI OS
Section 3: How to Bridge These Gaps
- Bridging the Technical Gap: Enabling Nontechnical Founders with AI OS
- Practical Engineering Solutions for AIM Inc. to Overcome These Gaps (Hypothetical)
Section 4: Simulation and Personalized Tasks for AIM Inc. to Catch Up with and Commercialize an AI OS Solution
Conclusion
References
Section 1: Hypotheses and Speculative Answers
Hypothesis 1: AI OS Abstracts Traditional OS Functions into a Unified, Intent-Based Interface
- Question: How does an AI OS manage resources and user interaction compared to a traditional operating system?
- Hypothesis: An AI OS abstracts hardware and software management into an intent-based unified interface. Instead of manually managing files, applications, and processes, users can issue high-level commands—such as “Optimize our revenue strategy”—via natural language or multi-modal inputs. The AI OS autonomously allocates resources, schedules processes, and integrates data to fulfill that directive. For example, if invoice processing suddenly spikes, a predictive model might automatically spin up additional cloud instances, all while maintaining a seamless experience for the user.

Hypothesis 2: AI OS Enables Real-Time, Cross-Departmental Decision-Making
- Question: How does an AI OS integrate data from all corporate divisions and translate it into actionable insights?
- Hypothesis: An AI OS ingests data from finance, HR, operations, and other departments into a centralized knowledge graph. Using a multi-modal LLM, it processes text, images, and structured data. It further employs causal inference models so that, when a supply chain delay is detected, the system cross-references the financial impact and automatically adjusts procurement orders while notifying stakeholders—all in real time.
Hypothesis 3: AI OS Is Built on a Cooperative Ecosystem of Specialized Agents
- Question: How does the agent orchestration layer (Stage 6 in the diagram) function within an AI OS?
- Hypothesis: An AI OS consists of specialized agents for planning, reflection, tool usage, and knowledge synthesis. The orchestration layer dynamically assigns tasks, monitors learning, and facilitates inter-agent communication. For instance, processing 1,000 contracts might involve a planning agent designing the workflow, a tool-usage agent extracting data via OCR, a reflection agent analyzing errors, and a conductor agent (powered by reinforcement learning) coordinating the entire operation.
Hypothesis 4: AI OS Requires Novel Security and Governance Models
- Question: What security and ethical challenges does an AI OS face?
- Hypothesis: Because an AI OS handles all corporate functions, it becomes an extremely high-value attack target. Therefore, it needs autonomous threat detection and self-healing capabilities (e.g., anomaly detection models that quarantine compromised components). Ethically, the AI OS might prioritize efficiency over headcount, risking workforce reductions. To mitigate this, a governance framework incorporating explainable AI (XAI) for transparency and human override mechanisms is essential.

Hypothesis 5: AI OS Democratizes Technology but Exacerbates Skill Gaps
- Question: How does an AI OS impact employees with low technical literacy (e.g., AIM Inc. clients)?
- Hypothesis: By enabling users to issue commands like “Show me quarterly revenue” via natural language or multi-modal interfaces, an AI OS dramatically lowers technical barriers. However, employees who understand and leverage AI principles will outpace those who simply follow the AI’s recommendations. As a result, a significant skill gap may emerge between AI-savvy staff and those who remain dependent on automated outputs.
Hypothesis 6: AI OS Integrates Sentiment and Emotional Analysis to Deeply Understand Human Teams
- Hypothesis: To enable effective leadership, the AI system must analyze team morale, burnout risk, and communication tone and then propose appropriate support strategies. In effect, the AI OS functions like a combined COO and organizational psychologist, continuously assessing and improving team dynamics.
Hypothesis 7: Companies That Do Not Adopt AI OS Will Fall Behind Rapidly
- Hypothesis: Organizations leveraging an AI OS will experience exponential gains in productivity and automation. In contrast, companies reliant on manual processes will be unable to keep pace and quickly lose market share.
Definition of AI OS Architecture
Assuming AI evolves into a fully featured operating system (AI OS), the corporate ecosystem would likely adopt a modular structure as follows:
1. Core System: AI OS Kernel
The AI OS’s nucleus functions like a kernel in a traditional OS but manages enterprise-wide decision-making (strategy execution and corporate governance) and orchestrates various AI modules (finance, HR, logistics, compliance, etc.). Continuously learning from internal and external data sources, this kernel autonomously—and under human supervision—directs corporate activities. In essence, it serves as the “brain” of the AI OS, guiding organizational operations.

2. Modular AI Subsystems
Rather than a monolithic structure, an AI OS is composed of dedicated agents (modules) specializing in distinct domains. For example:
- AI Governance Agent: Ensures regulatory compliance and ethical decision-making.
- AI Strategy Agent: Analyzes market trends and optimizes corporate growth strategies.
- AI Operations Agent: Automates and streamlines business processes.
- AI Communication Agent: Manages PR, customer interactions, and internal messaging.
This design allows the organization to add or remove modules as needed, building a scalable, adaptive system.
3. Distributed and Decentralized Processing
To improve resilience and efficiency, an AI OS would employ:
- Blockchain-Based Verification: Ensures transparency and security.
- Edge Computing: Distributes AI workloads across devices and cloud servers.
- Federated Learning: Enables learning from multiple systems without centralized data aggregation, preserving privacy.
A distributed architecture eliminates single points of failure and allows multiple AI components to interoperate, creating a robust system.
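The federated learning idea above can be sketched in a few lines. This is a toy version of federated averaging (FedAvg): each department trains on its own data and shares only model weights, never raw records. The departments, gradients, and learning rate are illustrative assumptions, not AIM Inc.'s actual systems.

```python
# Toy federated averaging: departments share weights, not raw data.

def local_update(weights, gradient, lr=0.1):
    """One local training step on a department's private data."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_weights):
    """Average locally updated weights into a new global model."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Global model shared with three departments (finance, HR, operations).
global_model = [0.0, 0.0]

# Each department computes an update from its own (private) data.
updates = [
    local_update(global_model, [1.0, 2.0]),
    local_update(global_model, [3.0, 0.0]),
    local_update(global_model, [2.0, 1.0]),
]

# Only weights reach the server -- records never leave a department.
global_model = federated_average(updates)
print([round(w, 6) for w in global_model])  # [-0.2, -0.1]
```

The privacy property comes from the protocol shape, not the math: the aggregation server never sees department data, only the averaged parameters.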
4. Human–AI Interaction Layer
Despite extensive automation, human oversight remains essential. An AI OS includes:
- Natural Language Interfaces: Facilitate seamless communication between humans and the system.
- Trust-Building Mechanisms: Leverage explainable AI (XAI) and audit trails to ensure transparency.
- Emergency Override Features: Allow human intervention for high-risk decisions.
An ideal AI OS acts not merely as an executor but as an “intelligent partner,” collaborating with humans to refine strategy.

Section 2: Research Findings
1. Architecture in a World Where AI OS Exists
Based on Stage 6 (the future AI agent architecture), we design an AI OS from an engineering perspective that simultaneously supports AIM Inc.’s SaaS product development and internal decision-making. The system might be organized as follows:
Agent Orchestration Layer
- Role: Acts as the “conductor” of the AI OS, overseeing task assignment, inter-agent communication, and performance monitoring.
- Implementation Example: A central orchestration agent uses reinforcement learning to dynamically allocate tasks to specialized agents. For instance, in contract processing at AIM Inc., the OCR extraction is handled by a tool-usage agent, the data validation by a knowledge agent, and the entire workflow is modeled as a Directed Acyclic Graph (DAG) for efficient execution. Agents communicate in real time via a message broker such as RabbitMQ.
AI Agent Layer
- Role: Executes specialized tasks—planning, reflection, tool usage, and knowledge synthesis.
- Implementation Example: Each agent is deployed as a microservice—tuned multi-modal LLMs or smaller specialized models (e.g., DistilBERT for text, YOLO for images)—running in containers on Kubernetes. The planning agent might use Monte Carlo Tree Search to propose workflows, while a reflection agent automatically tweaks OCR parameters based on error feedback.
Data Storage / Retrieval Layer
- Role: Manages structured data (relational databases), unstructured data (documents), and extended data (knowledge graphs, vector stores) in a unified way.
- Implementation Example: A hybrid stack: MySQL for structured data, Faiss for vector-based semantic search, and an RDF-based knowledge graph for relationships. In contract processing, the knowledge graph retrieves client details to verify clauses, while RAG references historical data to identify risk patterns.
Input/Output Layer
- Role: Handles multi-modal inputs (text, images, audio) and outputs (reports, action execution, visualizations).
- Implementation Example: Modal-specific encoders process inputs (e.g., Tesseract for OCR, CLIP for image understanding). Outputs leverage generative models (e.g., GPT for text generation, DALL·E for visualizations). For internal decision-making, contract status is visualized on dashboards; for the SaaS product, processed contract data is exposed via APIs to clients.
Integration Layer
- Role: Connects to external systems (client CRMs, AIM’s internal tools).
- Implementation Example: RESTful APIs and webhooks integrate third-party systems. For instance, data from a client ERP is ingested for contract processing, and results are pushed back to the client’s system via API calls.
Overall System Configuration
- The AI OS operates on a hybrid cloud infrastructure with a microservices-based, containerized architecture to ensure fault tolerance (circuit breakers for failover in case of component failure). At AIM Inc., resource allocation (e.g., staff assignments) happens automatically, while a SaaS-based contract-processing service is offered externally.
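The circuit-breaker pattern mentioned above can be illustrated with a minimal sketch: after a set number of consecutive failures, the breaker "opens" and calls are short-circuited to a fallback instead of hammering a failing microservice. The service name and fallback behavior are hypothetical.

```python
# Minimal circuit-breaker sketch for microservice fault tolerance.

class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.max_failures

    def call(self, func, fallback):
        if self.open:
            return fallback()      # short-circuit: skip the failing service
        try:
            result = func()
            self.failures = 0      # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            return fallback()

def flaky_service():
    raise RuntimeError("contract-processing service down")

breaker = CircuitBreaker(max_failures=2)
results = [breaker.call(flaky_service, lambda: "queued for retry")
           for _ in range(4)]
print(breaker.open, results)  # breaker opens after the second failure
```

In production this logic usually lives in a service mesh or a library rather than application code, but the state machine is the same.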
Figure 6
2. Technical Gaps Between the Current State and the Visionary AI OS
AIM Inc. currently resides at Stage 3 (Multi-Modal LLM Workflows), which includes multi-modal processing, tool usage, and basic memory functions. Below are the primary gaps between Stage 3 and the Stage 6 AI OS vision:
Gap 1: Lack of Agent Orchestration
- Current State: Likely uses a single multi-modal LLM for contract processing, integrating OCR and other tools in a linear fashion.
- Gap: No orchestration layer oversees multiple agents, making it difficult to automate and scale cross-departmental workflows (e.g., contract approval processes).
- Impact: Complex tasks require manual intervention, reducing processing speed and limiting scalability.
Gap 2: Limited Data Integration
- Current State: May use a vector database for RAG, but lacks a comprehensive backbone that integrates structured, unstructured, and enriched data.
- Gap: An AI OS demands a unified data layer that spans relational data, documents, knowledge graphs, and embeddings to support holistic decision-making (e.g., contract trend analysis, risk forecasting).
- Impact: Unable to provide strategic insights, reducing both internal utility and SaaS value.
Gap 3: Weak Reflection and Autonomous Learning
- Current State: Stage 3 includes basic memory, but advanced reflection (e.g., automatic error correction or workflow optimization) is not yet implemented.
- Gap: The AI OS requires reflection agents capable of self-analyzing errors and continuously optimizing parameters. Manual tuning remains the norm.
- Impact: High maintenance costs, increasing client dissatisfaction and support burden.
Gap 4: Insufficient Scalability and Modularity
- Current State: The system is likely monolithic, with tightly coupled functionalities.
- Gap: A microservices-based, modular architecture (e.g., independent OCR, text analysis, and reporting services) is required to enable horizontal scaling.
- Impact: Scaling the SaaS offering or internal usage becomes prohibitively costly and operationally inefficient.

Section 3: How to Bridge These Gaps
Bridging the Technical Gap: Enabling Nontechnical Founders with AI OS
- Definition: An AI OS is a platform that leverages natural language and AI-driven automation so that nontechnical founders can build and manage complex technical systems without full reliance on engineering teams.
- Functionality: It translates business goals into technical tasks and workflows, reduces dependency on specialized engineers, automates infrastructure management, and supports data-driven decision-making.
- Benefit: Nontechnical founders can overcome typical technical barriers during product development, improving the startup’s success rate.
- Example: “Steve,” an AI OS, integrates various tools and services to map business requirements directly to technical implementations.
- Impact: Founders can achieve more autonomy without sacrificing technical rigor, lowering time-to-market and reducing development bottlenecks.

Practical Engineering Solutions for AIM Inc. to Overcome These Gaps (Hypothetical)
Filling Gap 1: Implementing Agent Orchestration
Solution: Develop a lightweight orchestration layer using an open-source tool like Apache Airflow or a custom scheduler. First, decompose existing components (e.g., OCR, text analysis) into discrete tasks. Then build a conductor agent to manage them, and plan to add specialized agents over time.
Steps:
- Model workflows (e.g., contract processing) as a Directed Acyclic Graph (DAG) using Graphviz.
- Implement a task scheduler in Apache Airflow, assigning priorities (e.g., high-value contracts prioritized first).
- Set up RabbitMQ for inter-agent communication and asynchronous data exchange (e.g., send OCR outputs to validation agents).
- Prototype a rule-based conductor agent (e.g., “If OCR confidence < 90%, flag for review”), then roadmap the integration of reinforcement learning for dynamic task assignment.
Resources: 1–2 engineers with Python and Airflow expertise; $500 for cloud compute (e.g., AWS EC2 t3.medium).
Outcomes: Automate multi-step workflows, reduce manual interventions by ~50%. In SaaS mode, increase processing throughput to 500 contracts/hour. Internally, enable cross-functional task coordination between HR and Finance.
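The steps above can be sketched end to end: model the workflow as a DAG, execute tasks in dependency order, and apply the rule-based conductor ("If OCR confidence < 90%, flag for review"). The stdlib `graphlib` module stands in for an Airflow DAG here, and the task names and confidence values are illustrative.

```python
from graphlib import TopologicalSorter  # stdlib stand-in for an Airflow DAG

# Hypothetical contract-processing pipeline; in production each task would
# be an Airflow operator and results would flow over RabbitMQ.
dag = {
    "ocr": set(),
    "validate": {"ocr"},
    "report": {"validate"},
}

def run_ocr(doc):
    # Stub OCR step returning extracted text plus a confidence score.
    return {"text": "Contract for Client A", "confidence": doc["quality"]}

def conductor(ocr_result, threshold=0.90):
    """Rule-based conductor: low-confidence OCR goes to human review."""
    return "human_review" if ocr_result["confidence"] < threshold else "validate"

results = {}
for task in TopologicalSorter(dag).static_order():
    if task == "ocr":
        results["ocr"] = run_ocr({"quality": 0.85})
        results["route"] = conductor(results["ocr"])
    # validation/report steps would run here, gated on the conductor's route

print(results["route"])  # 0.85 < 0.90, so the document is routed to review
```

Swapping the rule-based conductor for a learned policy later only changes the `conductor` function; the DAG execution model stays the same, which is what makes the reinforcement-learning roadmap incremental.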
Filling Gap 2: Building an Integrated Data Layer
Solution: Create a hybrid data storage system that unifies structured and unstructured data. Use a vector store for semantic search and a knowledge graph for entity relationships.
Steps:
- Use Faiss to index contract documents via Sentence-BERT embeddings, enabling semantic retrieval.
- Build a basic knowledge graph in Neo4j to map relationships such as “Client A → Contract B → Payment Terms.”
- Develop a Flask-based RESTful API to aggregate data from internal systems (e.g., CRM, finance database) and provide rich contextual information.
Resources: 1 engineer with Python and database expertise; $300 for cloud storage (e.g., AWS RDS hosting for Neo4j).
Outcomes: Improve contract verification context accuracy by ~30%. Enhance SaaS features such as trend analysis and internal forecasting.
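The semantic-retrieval step can be sketched without any infrastructure. Below, brute-force cosine similarity over toy 3-dimensional vectors stands in for a Faiss index over Sentence-BERT embeddings; the contract IDs and vectors are purely illustrative.

```python
import math

# Stand-in for the Faiss index: toy embeddings keyed by contract id.
contracts = {
    "C-001": [0.9, 0.1, 0.0],   # payment-terms heavy
    "C-002": [0.1, 0.8, 0.1],   # liability heavy
    "C-003": [0.8, 0.2, 0.1],   # payment-terms heavy
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def semantic_search(query_vec, index, top_k=2):
    """Return the top_k contract ids ranked by cosine similarity."""
    ranked = sorted(index, key=lambda cid: cosine(query_vec, index[cid]),
                    reverse=True)
    return ranked[:top_k]

# Query embedding for something like "late payment penalties".
print(semantic_search([1.0, 0.0, 0.0], contracts))  # ['C-001', 'C-003']
```

Faiss replaces the `sorted` scan with an approximate nearest-neighbor index, which is what makes the same query shape viable over millions of documents.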
Filling Gap 3: Introducing Autonomous Learning (Reflection) Mechanisms
Solution: Implement an error logging and self-correction pipeline. For example, log OCR errors, analyze patterns, and automatically adjust parameters.
Steps:
- Integrate Loguru to collect OCR and validation errors into a SQLite database. (1 week)
- Fine-tune a DistilBERT model on error logs to classify and generate correction suggestions. (2 weeks)
- Build a rule-based feedback loop that either applies suggested corrections (e.g., “Increase image contrast for better OCR accuracy”) or flags for human review. (2 weeks)
Resources: 1 ML-capable Python engineer; $200 for GPU training (e.g., AWS EC2 g4dn instance).
Outcomes: Increase OCR accuracy from ~80% to ~96%, cutting the error rate from ~20% to ~4%. Significantly lower maintenance costs and reduce manual oversight for SaaS clients.
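The error-logging and feedback loop can be sketched with the stdlib `sqlite3` module; here a simple rule table stands in for Loguru and the DistilBERT classifier so the example is self-contained, and the error types and correction rules are illustrative assumptions.

```python
import sqlite3

# Error log in an in-memory SQLite table (Loguru would feed this in practice).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ocr_errors (doc_id TEXT, error_type TEXT)")
conn.executemany(
    "INSERT INTO ocr_errors VALUES (?, ?)",
    [("D1", "low_contrast"), ("D2", "low_contrast"), ("D3", "skewed_scan")],
)

# Rule table standing in for the fine-tuned classifier's suggestions.
CORRECTIONS = {
    "low_contrast": "increase image contrast before OCR",
    "skewed_scan": "apply deskew preprocessing",
}

def suggest_fixes(conn, min_count=2):
    """Suggest corrections only for error types seen at least min_count
    times, so one-off glitches do not trigger parameter changes."""
    rows = conn.execute(
        "SELECT error_type, COUNT(*) FROM ocr_errors GROUP BY error_type"
    ).fetchall()
    return {etype: CORRECTIONS[etype] for etype, n in rows if n >= min_count}

print(suggest_fixes(conn))  # only the recurring low_contrast error qualifies
```

The `min_count` threshold is the key design choice: it makes the loop react to patterns rather than single errors, which is what keeps automatic corrections safe to apply.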

Filling Gap 4: Adopting a Modular Architecture for Scalability
Solution: Refactor the monolithic system into microservices using containerization (Docker) and orchestration (Kubernetes) to achieve a scalable environment.
Steps:
- Break the system into distinct components—OCR, validation, reporting—each with well-defined APIs. (2 weeks)
- Containerize each component with Docker, specifying dependencies (e.g., Tesseract) in Dockerfiles. (3 weeks)
- Deploy a two-node Kubernetes cluster on AWS EKS (or equivalent), configuring auto-scaling policies (scale out when CPU > 70%). (2 weeks)
Resources: 2 DevOps engineers with Kubernetes expertise; $1,000 for Kubernetes hosting (AWS EKS, two t3.medium nodes).
Outcomes: Enable the SaaS environment to handle 1,000 contracts/hour, reduce downtime by ~80%. Internally, simplify scaling for HR and other automation use cases.
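The "scale out when CPU > 70%" policy can be made concrete with a toy version of the decision rule. This mirrors the proportional formula Kubernetes' Horizontal Pod Autoscaler uses (desired = ceil(current × average utilization / target)); the readings and limits are illustrative.

```python
import math

def desired_replicas(current, cpu_utilisations, target=0.70, max_replicas=10):
    """Scale replicas in proportion to average CPU vs. the target,
    clamped to [1, max_replicas]."""
    avg = sum(cpu_utilisations) / len(cpu_utilisations)
    return min(max_replicas, max(1, math.ceil(current * avg / target)))

# Two pods running hot: avg 0.925 -> ceil(2 * 0.925 / 0.7) = 3 replicas.
print(desired_replicas(2, [0.90, 0.95]))
# Four idle pods: avg 0.25 -> ceil(4 * 0.25 / 0.7) = 2 replicas.
print(desired_replicas(4, [0.20, 0.30]))
```

In a real cluster this decision is declared in an HPA manifest rather than coded by hand, but seeing the arithmetic helps when choosing the 70% target.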
Section 4: Simulation and Personalized Tasks for AIM Inc. to Catch Up with and Commercialize an AI OS Solution
Figure 9
Below are specific tasks for AIM Inc. to catch up to Stage 6 (the fully realized AI OS) and commercialize it effectively.
Task 1: Build a Lightweight Orchestration Layer
Objective: Automate complex workflows (e.g., contract processing, workforce scheduling) to progress toward Stage 4 (Advanced AI Agent Architecture).
Steps:
- Model workflows (e.g., contract review, HR–Finance collaboration) as DAGs using Graphviz.
- Implement a scheduler in Apache Airflow with priority rules (e.g., prioritize high-value contracts).
- Install RabbitMQ to enable inter-agent communication. (2 weeks)
- Prototype a rule-based conductor agent (e.g., “If OCR confidence < 90%, trigger human review”), and blueprint adding reinforcement learning later.
Resources: 1–2 engineers with Python and Airflow experience; $500 for cloud compute (AWS EC2 t3.medium).
Expected Outcomes: Automate multi-step workflows, cut manual interventions by ~50%. In SaaS mode, increase contract throughput to 500 contracts/hour and facilitate cross-functional tasks internally.
Task 2: Strengthen the Integrated Data Layer
Objective: Build the data backbone needed for Stage 6’s Data Storage/Retrieval Layer, supporting context-aware decision-making.
Steps:
- Use Faiss with Sentence-BERT embeddings to index contract data and enable semantic search.
- Build a Neo4j knowledge graph capturing relationships like “Client → Contract → Payment Terms.”
- Develop a Flask-based RESTful API to integrate internal systems (CRM, finance database) for enriched context.
Resources: 1 engineer with Python and database experience; $300 for cloud storage (AWS RDS for Neo4j).
Expected Outcomes: Improve contract verification accuracy by ~30%, enhance trend analysis and internal forecasts.
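The "Client → Contract → Payment Terms" relationships above can be prototyped with a plain dict before committing to Neo4j; in production this traversal would be a Cypher query. All entity names here are hypothetical examples.

```python
# Dict-based stand-in for the knowledge graph: (node, relation) -> targets.
graph = {
    ("Client A", "HAS_CONTRACT"): ["Contract B"],
    ("Contract B", "HAS_TERMS"): ["Net 30", "2% late fee"],
}

def payment_terms_for(client, graph):
    """Traverse Client -> Contract -> Payment Terms for verification."""
    terms = []
    for contract in graph.get((client, "HAS_CONTRACT"), []):
        terms.extend(graph.get((contract, "HAS_TERMS"), []))
    return terms

print(payment_terms_for("Client A", graph))  # ['Net 30', '2% late fee']
```

The point of moving this to a graph database later is not the two-hop query (a dict handles that) but indexing, concurrent writes, and arbitrary-depth traversals as the relationship set grows.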
Task 3: Implement an Autonomous Learning (Reflection) Mechanism
Objective: Introduce Stage 4 reflection capabilities to reduce error rates and lower maintenance burdens.
Steps:
- Incorporate Loguru to log OCR and validation errors into SQLite.
- Fine-tune DistilBERT on the error logs to classify errors and generate correction suggestions.
- Create a rule-based feedback loop that either applies suggested fixes (e.g., “Increase image contrast”) or flags items for human review.
Resources: 1 ML-savvy Python engineer; $200 for GPU training (AWS EC2 g4dn).
Expected Outcomes: Boost OCR accuracy from ~80% to ~96%, cutting the error rate from ~20% to ~4%, significantly reduce SaaS client maintenance costs, and decrease internal operational burdens.
Task 4: Transition to a Modular Architecture for Scalability
Objective: Refactor to a microservices-based architecture to secure scalability and adaptability for Stage 6.
Steps:
- Decompose the system into discrete APIs for OCR, validation, and reporting.
- Containerize each component with Docker, including necessary dependencies (e.g., Tesseract).
- Deploy a two-node Kubernetes cluster on AWS EKS, configuring auto-scaling when CPU usage exceeds 70%.
Resources: 1–2 DevOps engineers with Docker and Kubernetes expertise; $1,000 for Kubernetes hosting (AWS EKS, two t3.medium nodes).
Expected Outcomes: Scale to handle 1,000 contracts/hour in SaaS mode, reduce downtime by ~80%, and facilitate internal expansions (e.g., HR automation).
Task 5: Develop a SaaS API for Contract Processing
Objective: Commercialize AI OS capabilities by launching a contract-processing SaaS product for clients.
Steps:
- Use FastAPI to build RESTful endpoints: /process_contract for uploads and /get_results for retrieving outputs.
- Secure the API with OAuth 2.0 authentication via AWS API Gateway to manage client access.
- Build a simple React-based UI that supports voice commands (using Whisper) so that less technical clients can follow guided prompts like “Upload contract.”
Resources: 1 engineer with Python (FastAPI) and React experience; $500 for API Gateway and hosting on AWS.
Expected Outcomes: Launch a SaaS product projected to generate an additional $3,000–$5,000 in monthly recurring revenue. The voice-enabled UI can expand accessibility, increasing the potential customer base by ~20%.
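The handler logic behind the two endpoints can be sketched as plain functions; in the real service, FastAPI route decorators (e.g. `@app.post("/process_contract")`) would wrap these. The processing pipeline is stubbed and the response fields are illustrative assumptions.

```python
import uuid

# In-memory job store standing in for a real queue/database.
JOBS = {}

def process_contract(upload: bytes) -> dict:
    """POST /process_contract -- accept an upload, return a job id."""
    job_id = str(uuid.uuid4())
    # Stub for the OCR + validation pipeline described in Tasks 1-3.
    JOBS[job_id] = {"status": "done", "fields": {"client": "Client A"}}
    return {"job_id": job_id}

def get_results(job_id: str) -> dict:
    """GET /get_results -- return processed output or a not-found status."""
    return JOBS.get(job_id, {"status": "not_found"})

job = process_contract(b"%PDF- sample contract bytes")
print(get_results(job["job_id"])["status"])  # done
```

Keeping the business logic in plain functions like this also makes it testable without spinning up the web framework, which matters once the endpoints sit behind API Gateway.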
Task 6: Automate Internal Non-Engineering Operations
Objective: Use AI OS to streamline internal tasks (customer support, invoicing, etc.), enabling staff to focus on high-value activities.
Steps:
- Integrate with the QuickBooks API to auto-generate invoices based on processed contracts.
- Build a Node.js webhook service that syncs contract status with Salesforce (or another CRM).
- Leverage the knowledge graph to create a decision-making agent that automates staff scheduling based on historical data.
Resources: 1 engineer with Python and API integration experience; $300 for API subscription fees (QuickBooks, Salesforce).
Expected Outcomes: Reduce internal non-engineering workload by ~40%, free up ~10 staff-hours per week, and improve decision-making accuracy by ~25% (in areas like resource allocation and budget planning).
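The invoice-generation step can be sketched as a pure transformation from processed contract fields to an invoice payload that would then be POSTed to the QuickBooks API. The field names and tax rate here are hypothetical, not QuickBooks' actual schema.

```python
def build_invoice(contract: dict, tax_rate: float = 0.10) -> dict:
    """Map processed contract line items to an invoice payload."""
    subtotal = sum(item["amount"] for item in contract["line_items"])
    return {
        "client": contract["client"],
        "subtotal": subtotal,
        "tax": round(subtotal * tax_rate, 2),
        "total": round(subtotal * (1 + tax_rate), 2),
    }

contract = {
    "client": "Client A",
    "line_items": [{"desc": "Phase 1", "amount": 1000.0},
                   {"desc": "Phase 2", "amount": 500.0}],
}
print(build_invoice(contract))
# {'client': 'Client A', 'subtotal': 1500.0, 'tax': 150.0, 'total': 1650.0}
```

Separating this pure function from the API call makes the QuickBooks integration a thin adapter, so invoice logic can be unit-tested without hitting the external service.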
Task 7: Conduct Internal AI OS Training and Maintenance Onboarding
Objective: Institutionalize AI OS operations and maintenance processes to ensure sustainability.
Steps:
- Host a two-day workshop on AI OS fundamentals (workflow monitoring, error log analysis, etc.).
- Create a video-based user guide (e.g., “How to Process a Contract”) targeted at lower-technical staff.
Timeline: Continuous, starting early in the project and repeated quarterly.
Resources: Internal trainers or external consultants; minimal production cost for video materials.
Expected Outcomes: Establish a knowledgeable user base, reduce onboarding time for new hires, and foster a culture of continuous improvement around AI OS usage.
Task Summary & High-Level Roadmap
Overall Timeline: 6–8 months
- Tasks 1–4 (Core AI OS Development): 4–5 months
- Tasks 5–6 (Commercialization & Internal Automation): Next 2–3 months in parallel
- Task 7 (Training & Adoption): Ongoing from month 1
Total Resources: 3–4 engineers, total cloud and tooling budget $2,500–$3,000
- This investment is feasible given AIM Inc.’s current revenue projections.
Key Impacts:
- SaaS Commercialization: Launch a contract-processing SaaS, adding $3,000–$5,000 in MRR.
- Internal Efficiency: Automate 40% of non-engineering tasks, saving ~10 staff-hours weekly; improve staffing and budget decisions by ~25%.
- Client Accessibility: Provide a voice-enabled UI for less technical users, expanding the addressable market by ~20%.
- Scalability: Support processing 1,000 contracts/hour, ensuring readiness for rapid growth.
By implementing these tasks, AIM Inc. can successfully catch up to Stage 6, establish a competitive advantage in AI OS commercialization, and build a future-proof system aligned with Japanese market needs.
Conclusion
Working on this project has been both intellectually stimulating and highly instructive. Examining how AI could evolve into an operating system—capable of thinking, deciding, and orchestrating an entire organization—has been a pioneering exercise for me. Through research and analysis, the vision of AI OS has proven remarkably credible.
An AI OS is not merely a set of sophisticated tools or rapid automation; it redefines how organizations operate, how teams collaborate, and how decisions are made. I am confident that AIM Inc. possesses the strengths needed not only to follow this trend but to help build its foundation.
In this document, we deconstructed the AI OS vision step by step, clarified the technical gaps relative to current capabilities, and proposed actionable measures. Even incremental steps—such as building internal agents or experimenting with agent-based workflows—can lay the groundwork for an AI OS.
I am grateful for the opportunity to explore this future and look forward to continuing this journey with AIM Inc. as we co-create a truly revolutionary corporate operating system.
References
- CB Insights (2022). The Top 12 Reasons Startups Fail. CB Insights Research, December 1.
- Farid, A. (2024). 8 Non-Tech Founder Challenges That Prevent Successful Startups. Upstack Studio, June 15.
- HatchWorks AI (2024). How AI as an Operating System Is Shaping Our Digital Future. HatchWorks, June 6.
- IBM (n.d.). AI Agent Orchestration. IBM Think.
- Instill AI (n.d.). Modularity in AI. Instill AI Blog.
- arXiv (2024). LLM2Code: Understanding the Landscape of LLM-Based Code Generation, arXiv preprint arXiv:2407.14567v1.
- JetRockets (2023). Non-Tech Founders Face These Problems—Here’s How to Solve Them. JetRockets, December 21.
- Ojha, A. (2023). AI-Powered Operating System—A Race to Global Dominance. Medium, December 27.
- ThunderFYC (n.d.). AI OS. Medium.
- W3C (2024). Notes on the Synthesis of Form. W3C Working Draft.
- Walturn (n.d.). Best Product Management Tools.
- Walturn (n.d.). A Deep-Dive into Product-Market Fit.
- Walturn (2025). Bridging the Technical Gap: How AI OS Empowers Non-Technical Founders.
- U.S. News (2024). Companies Building AI Agents. U.S. News & World Report.
- DigitalOcean (n.d.). Types of AI Agents.
- AWS Prescriptive Guidance (2025). Deploy a Sample Java Microservice on Amazon EKS and Expose the Microservice Using an Application Load Balancer. Viewed April 17, 2025.
- Stack Overflow (2025). Airflow Distributed Message Flows with RabbitMQ Message Broker. Viewed April 17, 2025.
- Oryzae1824 (2025). “VERY IMPORTANT.” X, April 16.
- Kanpo_blog (2025). “Accenture Requires Full In-Office Attendance Five Days a Week Starting June: Is the Return to Office Trend Reemerging?” X, April 16.
- Kubotamas (2025). “Stages of AI Agent Evolution.” X, April 11.
- Suzuki, Y. (2025). “@kanpo_blog I Don’t Think Full In-Office Requirements Are Actually Causing Many Resignations.” X, April 16.


Mizuki Marumo / 丸茂 瑞喜
CEO
23 years old. Multiple internships; COO of a construction-focused AI startup.