SuperFolders Robotics

Technology at SuperFolders Robotics

At SuperFolders Robotics, we are building a robot orchestration SaaS platform — a unified control layer that abstracts the complexity of multi-robot systems. Our mission is to empower businesses to seamlessly connect their autonomous robots, assign high-level tasks through a natural chat-based interface, and receive structured results, reports, and media outputs — without needing deep robotics expertise.

Our platform enables organizations to deploy robots as reliable teammates, capable of understanding intent, navigating complex environments, and collaborating with humans and other machines.

System Architecture

Robot Orchestration Layer

We provide a centralized orchestration engine in the cloud that:

  • Manages fleets of heterogeneous robots
  • Handles task planning, dispatching, and feedback loops
  • Supports semantic commands such as: "Inspect the west wall," "Deliver coffee to the front gate," or "Capture a thermal image of the transformer"
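To make this concrete, here is a minimal sketch of how the orchestrator might dispatch one semantic task to a single robot over MQTT. The broker address, topic layout, and payload fields are illustrative assumptions rather than our production schema, and the snippet assumes the paho-mqtt 2.x client.

    import json
    import uuid
    import paho.mqtt.client as mqtt

    BROKER = "mqtt.example.com"          # assumed broker hostname
    TOPIC = "fleet/robot-007/tasks"      # assumed per-robot task topic

    # A semantic task: a high-level intent plus a named target, not raw coordinates.
    task = {
        "task_id": str(uuid.uuid4()),
        "intent": "inspect",
        "target": "west wall",
        "outputs": ["thermal_image", "report"],
        "priority": "normal",
    }

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.connect(BROKER, 1883, keepalive=60)
    client.publish(TOPIC, json.dumps(task), qos=1)   # QoS 1: at-least-once delivery
    client.disconnect()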

Edge Runtime Stack

Each robot runs a modular runtime built on:

  • NVIDIA Jetson (Orin/AGX/Xavier): Primary onboard compute unit for mission execution, sensor fusion, SLAM, and AI inference
  • ROS 2 Middleware: For modular, scalable robot software integration
  • MQTT/WebSocket Communication Layer: For persistent, low-latency two-way communication with our cloud controller
  • Local Storage & Buffering: For real-time logging, media caching, and offline resilience
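A matching edge-side sketch, under the same illustrative broker and topic assumptions: the runtime subscribes to its task topic, buffers each result in a local SQLite file for offline resilience, and publishes the result back to the cloud controller. The result payload and table schema are placeholders.

    import json
    import sqlite3
    import paho.mqtt.client as mqtt

    # Local buffer: results are written here first so they survive connectivity gaps.
    db = sqlite3.connect("mission_buffer.db")
    db.execute("CREATE TABLE IF NOT EXISTS results (task_id TEXT, payload TEXT, sent INTEGER)")

    def on_connect(client, userdata, flags, reason_code, properties):
        client.subscribe("fleet/robot-007/tasks", qos=1)

    def on_message(client, userdata, msg):
        task = json.loads(msg.payload)
        result = {"task_id": task["task_id"], "status": "accepted"}  # placeholder result
        db.execute("INSERT INTO results VALUES (?, ?, 0)",
                   (task["task_id"], json.dumps(result)))
        db.commit()
        client.publish("fleet/robot-007/results", json.dumps(result), qos=1)

    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("mqtt.example.com", 1883, keepalive=60)
    client.loop_forever()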

Modular Accessory Kits & Cross-Platform Integration

To enable rapid integration with a wide variety of robot platforms (quadrupeds, UGVs, UAVs), we are developing SuperFolders Modular Accessory Kits.

These kits serve as plug-and-play adapters — turning diverse robotic hardware into smart, cloud-connected agents ready to receive tasks and return mission results via the SuperFolders platform.
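A rough sketch of that adapter idea follows, using hypothetical class and method names: each kit implements the same small interface, so the cloud can hand any platform the same task structure and receive the same result structure back.

    from abc import ABC, abstractmethod

    class RobotAdapter(ABC):
        """Common interface an accessory kit implements for its platform."""

        @abstractmethod
        def execute(self, task: dict) -> dict:
            """Run a high-level task and return a structured result."""

    class QuadrupedAdapter(RobotAdapter):
        def execute(self, task: dict) -> dict:
            # Translate the semantic task into this vendor's SDK calls here.
            return {"task_id": task["task_id"], "status": "done", "media": []}

    class UAVAdapter(RobotAdapter):
        def execute(self, task: dict) -> dict:
            return {"task_id": task["task_id"], "status": "done", "media": []}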

Perception, Navigation & Mapping

SLAM & World Modeling

RTK Positioning

Centimeter-Level Accuracy: Using GNSS RTK correction data (via NTRIP and the u-blox F9P), we enable 1–2 cm precision path planning for critical deployments.
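The snippet below is a minimal NTRIP client sketch under stated assumptions: an F9P receiver on /dev/ttyACM0 and a caster at caster.example.com, with mountpoint, username, and password as placeholders. It simply forwards the RTCM correction stream to the receiver and requires pyserial.

    import base64
    import socket
    import serial

    CASTER, PORT, MOUNT = "caster.example.com", 2101, "MOUNTPOINT"
    auth = base64.b64encode(b"username:password").decode()

    request = (
        f"GET /{MOUNT} HTTP/1.1\r\n"
        f"Host: {CASTER}\r\n"
        "Ntrip-Version: Ntrip/2.0\r\n"
        "User-Agent: NTRIP sketch-client\r\n"
        f"Authorization: Basic {auth}\r\n\r\n"
    )

    gnss = serial.Serial("/dev/ttyACM0", 38400)       # F9P serial port
    sock = socket.create_connection((CASTER, PORT))
    sock.sendall(request.encode())
    sock.recv(4096)                                   # consume the caster's response header

    while True:
        rtcm = sock.recv(1024)                        # raw RTCM 3 correction data
        if not rtcm:
            break
        gnss.write(rtcm)                              # feed corrections to the receiver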

Computer Vision & Edge AI

Edge AI Inference

We utilize YOLOv8, EfficientDet, and custom-trained models for real-time detection and classification, running directly on the robots' onboard NVIDIA Jetson compute described above.
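As a minimal illustration of the detection path, the sketch below runs a pretrained YOLOv8 model on one captured frame via the ultralytics package; the model weights, image path, and confidence threshold are placeholders.

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                # small pretrained model as a stand-in
    results = model("frame.jpg", conf=0.5)    # run inference on one captured frame

    for box in results[0].boxes:
        label = model.names[int(box.cls)]     # class name, e.g. "person"
        print(label, float(box.conf))         # detection label and confidence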

Vertical-Specific Models

We are building vision models optimized for specific industry verticals.

Semantic & Environmental Awareness

We are teaching robots to understand and reason about their environments.

Example: Instead of "Go to waypoint 42.389, -71.128", users can say:
"Go to the south gate and photograph the control panel."

Context-Aware Mobility & Skill Training

We're training robots to navigate and operate safely in complex, variable environments:

Terrain-Adaptive Navigation

  • Forest paths: Avoiding roots, rocks, hidden holes
  • Construction sites: Handling debris, elevation, narrow spaces

Environmental Task Modules

  • Periodic patrols
  • Object pickup/delivery in unstructured zones
  • Multimodal inspection workflows
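For example, a periodic patrol module might be configured with a small declarative description like the sketch below; the field names and values are assumptions, not the platform's actual schema.

    from datetime import timedelta

    # Hypothetical patrol configuration: semantic waypoints, a repeat interval,
    # and actions to take when something unexpected is detected.
    patrol = {
        "name": "perimeter-patrol",
        "waypoints": ["north fence", "east gate", "south gate"],
        "interval": timedelta(hours=2),
        "on_anomaly": ["capture_photo", "notify_operator"],
    }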

Sensor & Payload Expansion

We are integrating a suite of sensors that dramatically expand robot capabilities:

  • Thermal Cameras (FLIR): Electrical inspection, heat loss analysis, HVAC diagnostics
  • Near-Infrared Cameras (NIR): Vegetation health, plant monitoring, agri-inspection
  • VOC Sensors: Air quality, gas leak detection, environmental compliance checks
  • IMU / Wheel Encoders: Odometry and local stabilization
  • External RTK Base Stations: Standalone GNSS correction for mobile deployment
  • Custom Sensor Add-ons: Configurable based on client use-cases and environments

Human-Robot Interaction (HRI)

Our robots will gain social and situational awareness through dedicated behavior modules, beginning with a planned Social Skills Engine.

What's Next

Whether it's an outdoor patrol bot, an indoor inspection agent, or a multi-sensor mobile scanner, our long-term vision is a stack that adapts and evolves to power the future of autonomous business operations.