<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Agon Serifi - Research Publications</title>
  <subtitle>Research publications by Agon Serifi, Disney Research Robotics</subtitle>
  <link href="https://aserifi.github.io/feed.xml" rel="self"/>
  <link href="https://aserifi.github.io/"/>
  <id>https://aserifi.github.io/</id>
  <updated>2026-04-06T00:00:00Z</updated>
  <author>
    <name>Agon Serifi</name>
    <email>agon.serifi@disneyresearch.com</email>
  </author>

  <entry>
    <title>Kamino: GPU-based Massively Parallel Simulation of Multi-Body Systems with Challenging Topologies</title>
    <link href="https://kamino.disneyresearch.com"/>
    <id>https://arxiv.org/abs/2603.16536</id>
    <updated>2025-03-01T00:00:00Z</updated>
    <summary>This paper presents Kamino, a GPU-based physics solver for massively parallel simulation of heterogeneous, highly coupled mechanical systems. Built in Python with NVIDIA Warp and integrated into the Newton framework, Kamino natively supports complex topologies with kinematic loops, avoiding tree-based approximations.</summary>
  </entry>

  <entry>
    <title>Olaf: Bringing an Animated Character to Life in the Physical World</title>
    <link href="https://arxiv.org/abs/2512.16705"/>
    <id>https://arxiv.org/abs/2512.16705</id>
    <updated>2025-01-01T00:00:00Z</updated>
    <summary>In this work, we bring Olaf, an animated character, to life in the physical world, using reinforcement learning guided by animation references for control.</summary>
  </entry>

  <entry>
    <title>Robot Crash Course: Learning Soft and Stylized Falling</title>
    <link href="https://arxiv.org/abs/2511.10635"/>
    <id>https://arxiv.org/abs/2511.10635</id>
    <updated>2025-01-01T00:00:00Z</updated>
    <summary>This paper presents a method for learning controlled, soft falls for bipedal robots. Rather than preventing falls, we focus on minimizing physical damage and achieving desired end poses through a robot-agnostic reward function that balances impact minimization with pose control during reinforcement learning.</summary>
  </entry>

  <entry>
    <title>AMOR: Adaptive Character Control through Multi-Objective Reinforcement Learning</title>
    <link href="https://la.disneyresearch.com/publication/amor-adaptive-character-control-through-multi-objective-reinforcement-learning/"/>
    <id>https://doi.org/10.1145/3721238.3730656</id>
    <updated>2025-01-01T00:00:00Z</updated>
    <summary>This paper presents AMOR, a method for adaptive character control through multi-objective reinforcement learning. A single policy is trained conditioned on reward weights and motion context, enabling post-training tuning of reward trade-offs and zero-shot adaptation to diverse tracking behaviors.</summary>
  </entry>

  <entry>
    <title>Autonomous Human-Robot Interaction via Operator Imitation</title>
    <link href="https://arxiv.org/abs/2504.02724"/>
    <id>https://arxiv.org/abs/2504.02724</id>
    <updated>2025-01-01T00:00:00Z</updated>
    <summary>This paper presents a method for autonomously controlling expressive human-robot interactions by training a diffusion-based imitation model on real operator data. The resulting system predicts both continuous and discrete control inputs while conditioning on human pose.</summary>
  </entry>

  <entry>
    <title>Robot Motion Diffusion Model: Motion Generation for Robotic Characters</title>
    <link href="https://la.disneyresearch.com/publication/robot-motion-diffusion-model-motion-generation-for-robotic-characters/"/>
    <id>https://doi.org/10.1145/3680528.3687626</id>
    <updated>2024-12-01T00:00:00Z</updated>
    <summary>This paper presents a method for aligning text-conditioned kinematic motion generators with the capabilities of robotic characters. Our approach combines motion generation with physics-based character control using a critic that acts as a reward surrogate.</summary>
  </entry>

  <entry>
    <title>Spline-Based Transformers</title>
    <link href="https://la.disneyresearch.com/publication/spline-based-transformers/"/>
    <id>https://doi.org/10.1007/978-3-031-73016-0_1</id>
    <updated>2024-10-01T00:00:00Z</updated>
    <summary>This paper introduces a new class of Transformer models that eliminates the need for positional encoding. Spline-based Transformers embed an input sequence of elements as a smooth trajectory in latent space, significantly outperforming traditional Transformer models in efficiency and accuracy.</summary>
  </entry>

  <entry>
    <title>VMP: Versatile Motion Priors for Robustly Tracking Motion on Physical Characters</title>
    <link href="https://la.disneyresearch.com/publication/vmp-versatile-motion-priors-for-robustly-tracking-motion-on-physical-characters/"/>
    <id>https://doi.org/10.1111/cgf.15175</id>
    <updated>2024-07-01T00:00:00Z</updated>
    <summary>This paper introduces a two-stage framework for transferring expressive and dynamic motions onto robotic characters. Starting from a large corpus of human motion capture data, we first learn a latent space representation, then use it as a prior for training a tracking controller through reinforcement learning.</summary>
  </entry>

  <entry>
    <title>Transformer-Based Neural Augmentation of Robot Simulation Representations</title>
    <link href="https://la.disneyresearch.com/publication/transformer-based-neural-augmentation-of-robot-simulation-representations/"/>
    <id>https://doi.org/10.1109/LRA.2023.3271812</id>
    <updated>2023-10-01T00:00:00Z</updated>
    <summary>This paper introduces a modular approach for neural augmentation of simulation states, improving robot simulation accuracy by targeting individual building blocks such as actuators and rigid bodies separately.</summary>
  </entry>
</feed>
