<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title>Prorok Lab</title><description>Activity stream from the Prorok Lab — papers, talks, videos, posts, and openings.</description><link>https://www.proroklab.org/</link><item><title>Computer Science Open Day at the Prorok Lab</title><link>https://www.proroklab.org/posts/open-day/</link><guid isPermaLink="true">https://www.proroklab.org/posts/open-day/</guid><description>Teams of small robots interpreted instructions given in plain English and autonomously carried out complex missions — staying synchronised, avoiding collisions, and adapting in real time to unexpected disruptions.</description><pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate></item><item><title>Concrete multi-agent path planning published in npj Robotics</title><link>https://www.proroklab.org/posts/bbc-click-fantastico/</link><guid isPermaLink="true">https://www.proroklab.org/posts/bbc-click-fantastico/</guid><description>This work bridges CONtinuous robot dynamics and scalable disCRETE search — what we call &quot;concrete&quot; planning. 
Our approach enables robots to execute agile, safe, and efficient motions even in dense environments.</description><pubDate>Wed, 18 Mar 2026 00:00:00 GMT</pubDate></item><item><title>Concrete multi-agent path planning enabling kinodynamically aggressive maneuvers</title><link>https://www.proroklab.org/publications/concrete-mapf-npj-robotics/</link><guid isPermaLink="true">https://www.proroklab.org/publications/concrete-mapf-npj-robotics/</guid><description>PAPER — Keisuke Okumura, Guang Yang, Zhan Gao, Heedo Woo, Amanda Prorok</description><pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate></item><item><title>Concrete Multi-Agent Path Planning Enabling Kinodynamically Aggressive Maneuvers</title><link>https://www.proroklab.org/videos/concrete-mapf/</link><guid isPermaLink="true">https://www.proroklab.org/videos/concrete-mapf/</guid><description>Framework for aggressive multi-robot coordination deployed with 40 robots (20 aerial, 8 ground, and 12 acting as obstacles) in a compact lab space. Published in npj Robotics (2026).</description><pubDate>Mon, 16 Mar 2026 00:00:00 GMT</pubDate></item><item><title>Visit the lab at the Cambridge Festival — 21 March 2026</title><link>https://www.proroklab.org/posts/cambridge-festival-visit/</link><guid isPermaLink="true">https://www.proroklab.org/posts/cambridge-festival-visit/</guid><description>Researchers in our lab will be showcasing a system that lets teams of robots carry out instructions given in plain English — listening to a human voice, then completing their mission entirely on their own.</description><pubDate>Thu, 12 Mar 2026 00:00:00 GMT</pubDate></item><item><title>Eight years of growth at the Prorok Lab</title><link>https://www.proroklab.org/posts/cambridge-academic-life/</link><guid isPermaLink="true">https://www.proroklab.org/posts/cambridge-academic-life/</guid><description>The Prorok Lab has grown enormously over the last eight years and continues to be a team driven by collaboration.</description><pubDate>Tue, 10 Mar 2026 00:00:00 
GMT</pubDate></item><item><title>Three Prorok Lab papers accepted at ICLR 2026</title><link>https://www.proroklab.org/posts/iclr2026-accepted/</link><guid isPermaLink="true">https://www.proroklab.org/posts/iclr2026-accepted/</guid><description>Work that spans robot policy watermarking, higher-order interactions in multi-agent pathfinding, and the role of diversity in MARL — covering remote auditing of robot policies, hypergraph attention for dense MAPF, and when heterogeneity actually pays off.</description><pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate></item><item><title>Matteo Bettini wins the 2025 IEEE RAS Best Dissertation Award</title><link>https://www.proroklab.org/posts/ieee-ras-feature/</link><guid isPermaLink="true">https://www.proroklab.org/posts/ieee-ras-feature/</guid><description>Matteo&apos;s thesis on neural diversity in multi-agent learning provides a comprehensive and impactful study, demonstrating that diversity is a previously under-explored yet fundamental factor for effective collective learning in MARL.</description><pubDate>Fri, 06 Feb 2026 00:00:00 GMT</pubDate></item><item><title>Graph Attention-Guided Search presented at AAAI 2026</title><link>https://www.proroklab.org/posts/aaai2026-pathfinding/</link><guid isPermaLink="true">https://www.proroklab.org/posts/aaai2026-pathfinding/</guid><description>Big congratulations to Rishabh Jain, Keisuke Okumura and Michael Amir on presenting &quot;Graph Attention-Guided Search for Dense Multi-Agent Pathfinding&quot; at AAAI 2026 — pushing forward the state of the art in scalable multi-agent coordination.</description><pubDate>Tue, 27 Jan 2026 00:00:00 GMT</pubDate></item><item><title>When Is Diversity Rewarded in Cooperative Multi-Agent Learning?</title><link>https://www.proroklab.org/publications/iclr2026-diversity-rewarded/</link><guid isPermaLink="true">https://www.proroklab.org/publications/iclr2026-diversity-rewarded/</guid><description>PAPER — Michael Amir, Matteo Bettini, Amanda 
Prorok</description><pubDate>Tue, 20 Jan 2026 00:00:00 GMT</pubDate></item><item><title>Pairwise is Not Enough: Hypergraph Neural Networks for Multi-Agent Pathfinding</title><link>https://www.proroklab.org/publications/iclr2026-hypergraph-mapf/</link><guid isPermaLink="true">https://www.proroklab.org/publications/iclr2026-hypergraph-mapf/</guid><description>PAPER — Rishabh Jain, Keisuke Okumura, Michael Amir, Pietro Liò, Amanda Prorok</description><pubDate>Tue, 20 Jan 2026 00:00:00 GMT</pubDate></item><item><title>Remotely Detectable Robot Policy Watermarking</title><link>https://www.proroklab.org/publications/iclr2026-watermarking/</link><guid isPermaLink="true">https://www.proroklab.org/publications/iclr2026-watermarking/</guid><description>PAPER — Michael Amir, Manon Flageat, Amanda Prorok</description><pubDate>Tue, 20 Jan 2026 00:00:00 GMT</pubDate></item><item><title>Looking ahead to 2026</title><link>https://www.proroklab.org/posts/multiagent-systems-feature/</link><guid isPermaLink="true">https://www.proroklab.org/posts/multiagent-systems-feature/</guid><description>Grateful for a brilliant 2025 and excited for what lies ahead in 2026 — here&apos;s to continued growth, new opportunities, and building what&apos;s next.</description><pubDate>Mon, 12 Jan 2026 00:00:00 GMT</pubDate></item><item><title>No-Regret Thompson Sampling for Finite-Horizon Markov Decision Processes with Gaussian Processes</title><link>https://www.proroklab.org/publications/neurips2025-thompson-sampling/</link><guid isPermaLink="true">https://www.proroklab.org/publications/neurips2025-thompson-sampling/</guid><description>PAPER — Jasmine Bayrooti, Sattar Vakili, Amanda Prorok, Carl Henrik Ek</description><pubDate>Wed, 10 Dec 2025 00:00:00 GMT</pubDate></item><item><title>Improved regret bounds for Thompson sampling — NeurIPS 2025</title><link>https://www.proroklab.org/posts/jasmine-bayrooti-joins/</link><guid 
isPermaLink="true">https://www.proroklab.org/posts/jasmine-bayrooti-joins/</guid><description>Jasmine Bayrooti will be presenting work that establishes no-regret guarantees for Thompson sampling in episodic Markov Decision Processes modeled via joint multi-output Gaussian Processes.</description><pubDate>Tue, 02 Dec 2025 00:00:00 GMT</pubDate></item><item><title>Joint away day on the future of intelligent robotics</title><link>https://www.proroklab.org/posts/continual-learning-robotics/</link><guid isPermaLink="true">https://www.proroklab.org/posts/continual-learning-robotics/</guid><description>Bringing together diverse teams, perspectives, and research agendas sparked exactly the kind of conversations we need as robotics moves rapidly from controlled environments into the dynamic, unpredictable real world.</description><pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate></item><item><title>LaGAT: neural search outperforms classic MAPF planners — AAAI 2026</title><link>https://www.proroklab.org/posts/aaai2026-mapf/</link><guid isPermaLink="true">https://www.proroklab.org/posts/aaai2026-mapf/</guid><description>For (arguably) the first time, our neural approach — LaGAT — clearly surpasses leading search-based planners. 
The key: fusing GNN-based imitation learning with the LaCAM search planner to get the best of both worlds.</description><pubDate>Mon, 17 Nov 2025 00:00:00 GMT</pubDate></item><item><title>Graph Attention-Guided Search for Dense Multi-Agent Pathfinding</title><link>https://www.proroklab.org/publications/lagat-aaai26/</link><guid isPermaLink="true">https://www.proroklab.org/publications/lagat-aaai26/</guid><description>PAPER — Rishabh Jain, Keisuke Okumura, Michael Amir, Amanda Prorok</description><pubDate>Sat, 08 Nov 2025 00:00:00 GMT</pubDate></item><item><title>Lab research featured on Brazil&apos;s Fantástico</title><link>https://www.proroklab.org/posts/science-comms-impact/</link><guid isPermaLink="true">https://www.proroklab.org/posts/science-comms-impact/</guid><description>On air we discussed how our lab is pushing the boundaries of collective intelligence in multi-robot and multi-agent systems — and what it means to take this work to a broader audience.</description><pubDate>Mon, 27 Oct 2025 00:00:00 GMT</pubDate></item><item><title>D4orm: diffusion-denoised multi-robot trajectories — IROS 2025</title><link>https://www.proroklab.org/posts/iros2025-presence/</link><guid isPermaLink="true">https://www.proroklab.org/posts/iros2025-presence/</guid><description>D4orm introduces a novel optimization framework for multi-robot trajectory generation. 
Inspired by diffusion generative models, it iteratively denoises noisy trajectories into smooth, collision-free, dynamically feasible ones — ready for real-world deployment.</description><pubDate>Tue, 14 Oct 2025 00:00:00 GMT</pubDate></item><item><title>Hiring: Postdoctoral Research Associate in Computer Vision &amp; Robotics</title><link>https://www.proroklab.org/posts/hiring-ra-vision/</link><guid isPermaLink="true">https://www.proroklab.org/posts/hiring-ra-vision/</guid><description>We&apos;re looking for a motivated Postdoc to develop and deploy computer vision systems on real robots — from tracking and 3D reconstruction to visual SLAM and in-field testing with aerial and ground platforms.</description><pubDate>Thu, 09 Oct 2025 00:00:00 GMT</pubDate></item><item><title>Prorok Lab at CoRL 2025</title><link>https://www.proroklab.org/posts/corl2025-presence/</link><guid isPermaLink="true">https://www.proroklab.org/posts/corl2025-presence/</guid><description>Michael Amir presented ReCoDe at the poster session, Peter Woo showcased the Sanity quadrotor at the Open Source Hardware Workshop, and our alumnus Steven Morad delivered a spotlight talk at the RemembRL Workshop.</description><pubDate>Wed, 01 Oct 2025 00:00:00 GMT</pubDate></item><item><title>Mosaic Robotics wins the European Robotics League Smart City competition</title><link>https://www.proroklab.org/posts/smart-city-robotics/</link><guid isPermaLink="true">https://www.proroklab.org/posts/smart-city-robotics/</guid><description>Our winning solution leverages our expertise in multi-agent systems and perception, utilizing an off-board camera network to provide the robot with real-time contextual information — demonstrating the power of collective intelligence in a dynamic, social setting.</description><pubDate>Mon, 29 Sep 2025 00:00:00 GMT</pubDate></item><item><title>Amanda&apos;s Science Robotics Viewpoint on collective robot 
intelligence</title><link>https://www.proroklab.org/posts/extending-robot-minds-collective/</link><guid isPermaLink="true">https://www.proroklab.org/posts/extending-robot-minds-collective/</guid><description>The article champions a paradigm shift that lies at the heart of our lab&apos;s mission: moving away from building do-it-all robots, and instead advancing collective robotic intelligence — diverse, specialized robots that learn, adapt, and collaborate.</description><pubDate>Thu, 25 Sep 2025 00:00:00 GMT</pubDate></item><item><title>Extending robot minds through collective learning</title><link>https://www.proroklab.org/publications/extending-robot-minds/</link><guid isPermaLink="true">https://www.proroklab.org/publications/extending-robot-minds/</guid><description>PAPER — Amanda Prorok</description><pubDate>Wed, 24 Sep 2025 00:00:00 GMT</pubDate></item><item><title>&quot;Extending robot minds through collective learning&quot; — Amanda&apos;s Science Robotics Viewpoint</title><link>https://www.proroklab.org/posts/science-robotics-extending-minds/</link><guid isPermaLink="true">https://www.proroklab.org/posts/science-robotics-extending-minds/</guid><description>I argue for a new paradigm: collective robotic intelligence. 
Instead of a single, central &quot;brain&quot;, we should design robot collectives made up of diverse, specialized agents that learn and work together.</description><pubDate>Wed, 24 Sep 2025 00:00:00 GMT</pubDate></item><item><title>ReCoDe accepted at CoRL 2025</title><link>https://www.proroklab.org/posts/corl2025-marl/</link><guid isPermaLink="true">https://www.proroklab.org/posts/corl2025-marl/</guid><description>ReCoDe introduces a reinforcement learning framework that dynamically adapts constraints, enabling agents to coordinate more effectively in challenging environments.</description><pubDate>Tue, 16 Sep 2025 00:00:00 GMT</pubDate></item><item><title>ReCoDe: Reinforcement Learning-based Dynamic Constraint Design for Multi-Agent Coordination</title><link>https://www.proroklab.org/publications/recode-corl2025/</link><guid isPermaLink="true">https://www.proroklab.org/publications/recode-corl2025/</guid><description>PAPER — Michael Amir, Guang Yang, Zhan Gao, Keisuke Okumura, Heedo Woo, Amanda Prorok</description><pubDate>Tue, 12 Aug 2025 00:00:00 GMT</pubDate></item><item><title>ReCoDe: Reinforcement Learning-based Dynamic Constraint Design for Multi-Agent Coordination</title><link>https://www.proroklab.org/videos/recode-corl2025/</link><guid isPermaLink="true">https://www.proroklab.org/videos/recode-corl2025/</guid><description>Hybrid decentralized framework combining optimization-based control with MARL. Learns dynamic constraints to improve multi-agent coordination. 
Presented at CoRL 2025.</description><pubDate>Tue, 12 Aug 2025 00:00:00 GMT</pubDate></item><item><title>Introducing Sanity, our agile quadrotor for multi-agent indoor flight</title><link>https://www.proroklab.org/posts/multi-drone-systems/</link><guid isPermaLink="true">https://www.proroklab.org/posts/multi-drone-systems/</guid><description>At just 72g with brushless motors, onboard WiFi, and dual processors, Sanity hits speeds over 6 m/s and supports large-scale experiments with up to 20 drones flying simultaneously indoors.</description><pubDate>Thu, 10 Jul 2025 00:00:00 GMT</pubDate></item><item><title>D4orm: Multi-Robot Trajectories with Dynamics-aware Diffusion Denoised Deformations</title><link>https://www.proroklab.org/publications/d4orm-iros2025/</link><guid isPermaLink="true">https://www.proroklab.org/publications/d4orm-iros2025/</guid><description>PAPER — Yixiao Zhang, Keisuke Okumura, Heedo Woo, Ajay Shankar, Amanda Prorok</description><pubDate>Fri, 04 Jul 2025 00:00:00 GMT</pubDate></item><item><title>D4orm: Multi-Robot Trajectories with Dynamics-aware Diffusion Denoised Deformations</title><link>https://www.proroklab.org/videos/d4orm-iros2025/</link><guid isPermaLink="true">https://www.proroklab.org/videos/d4orm-iros2025/</guid><description>Optimization method generating kinodynamically feasible, collision-free multi-robot trajectories using diffusion denoising. Evaluated for teams of up to 16 robots in 2D and 3D worlds. Presented at IROS 2025.</description><pubDate>Fri, 04 Jul 2025 00:00:00 GMT</pubDate></item><item><title>Three Prorok Lab PhDs graduate</title><link>https://www.proroklab.org/posts/phd-graduation-2025/</link><guid isPermaLink="true">https://www.proroklab.org/posts/phd-graduation-2025/</guid><description>Three incredible members of the Prorok Lab — Steven Morad, Jan Blumenkamp and Ryan Kortvelesy — have recently graduated with their PhDs. 
Their dedication and brilliance have been a huge part of our lab&apos;s journey.</description><pubDate>Tue, 17 Jun 2025 00:00:00 GMT</pubDate></item><item><title>Amanda presents System Neural Diversity at RLDM 2025</title><link>https://www.proroklab.org/posts/rldm2025-marl/</link><guid isPermaLink="true">https://www.proroklab.org/posts/rldm2025-marl/</guid><description>The talk highlighted why behavioral diversity is more than a design choice — it&apos;s a necessity for achieving complex, high-level coordination in large robot teams.</description><pubDate>Mon, 16 Jun 2025 00:00:00 GMT</pubDate></item><item><title>Language-conditioned offline RL for multi-robot navigation — ICRA 2025</title><link>https://www.proroklab.org/posts/icra2025-offline-rl/</link><guid isPermaLink="true">https://www.proroklab.org/posts/icra2025-offline-rl/</guid><description>Instead of using LLMs to directly control robots, we use them to translate human language into a compact, consistent latent vector that captures the essence of the command — and our offline-trained robots act on that vector regardless of how the instruction is phrased.</description><pubDate>Fri, 23 May 2025 00:00:00 GMT</pubDate></item><item><title>DVM-SLAM: decentralized cooperative monocular SLAM — ICRA 2025</title><link>https://www.proroklab.org/posts/multi-robot-slam/</link><guid isPermaLink="true">https://www.proroklab.org/posts/multi-robot-slam/</guid><description>DVM-SLAM enables multiple robots to collaboratively map unknown environments while simultaneously tracking their positions — using only low-cost, lightweight monocular cameras and operating in a fully decentralized manner.</description><pubDate>Wed, 21 May 2025 00:00:00 GMT</pubDate></item><item><title>DVM-SLAM: Decentralized Visual Monocular Simultaneous Localization and Mapping for Multi-Agent Systems</title><link>https://www.proroklab.org/publications/dvm-slam-icra2025/</link><guid 
isPermaLink="true">https://www.proroklab.org/publications/dvm-slam-icra2025/</guid><description>PAPER — Joshua Bird, Jan Blumenkamp, Amanda Prorok</description><pubDate>Thu, 15 May 2025 00:00:00 GMT</pubDate></item><item><title>Language-Conditioned Offline RL for Multi-Robot Navigation</title><link>https://www.proroklab.org/publications/language-conditioned-offline-rl-icra2025/</link><guid isPermaLink="true">https://www.proroklab.org/publications/language-conditioned-offline-rl-icra2025/</guid><description>PAPER — Steven Morad, Ajay Shankar, Jan Blumenkamp, Amanda Prorok</description><pubDate>Thu, 15 May 2025 00:00:00 GMT</pubDate></item><item><title>Efficient Model-Based Reinforcement Learning Through Optimistic Thompson Sampling</title><link>https://www.proroklab.org/publications/iclr2025-thompson-sampling/</link><guid isPermaLink="true">https://www.proroklab.org/publications/iclr2025-thompson-sampling/</guid><description>PAPER — Jasmine Bayrooti, Carl Henrik Ek, Amanda Prorok</description><pubDate>Thu, 24 Apr 2025 00:00:00 GMT</pubDate></item><item><title>HOT-GP: principled exploration in model-based RL — ICLR 2025</title><link>https://www.proroklab.org/posts/efficient-model-based-rl/</link><guid isPermaLink="true">https://www.proroklab.org/posts/efficient-model-based-rl/</guid><description>HOT-GP is a model-based reinforcement learning algorithm that explores efficiently by reasoning about joint uncertainty over both environment dynamics and rewards — sampling transitions conditioned on optimistic rewards to imagine plausible, high-value futures.</description><pubDate>Thu, 24 Apr 2025 00:00:00 GMT</pubDate></item><item><title>Welcome to Manon Flageat</title><link>https://www.proroklab.org/posts/welcome-research-team/</link><guid isPermaLink="true">https://www.proroklab.org/posts/welcome-research-team/</guid><description>Manon brings a wealth of expertise in diversity-seeking machine learning and reinforcement learning algorithms — and we&apos;re thrilled to 
have her on board.</description><pubDate>Wed, 23 Apr 2025 00:00:00 GMT</pubDate></item><item><title>Synthesizing Multi-Agent Policies: From Cooperative Robot Perception to Human-Led Fleet Control</title><link>https://www.proroklab.org/talks/ace-network-forum-oxford-2025/</link><guid isPermaLink="true">https://www.proroklab.org/talks/ace-network-forum-oxford-2025/</guid><description>Invited · ACE Network Forum — University of Oxford</description><pubDate>Wed, 26 Mar 2025 00:00:00 GMT</pubDate></item><item><title>Amanda keynotes at the ACE Network Forum in Oxford</title><link>https://www.proroklab.org/posts/ace-network-forum-oxford/</link><guid isPermaLink="true">https://www.proroklab.org/posts/ace-network-forum-oxford/</guid><description>Amanda shared our lab&apos;s latest research on how multi-agent systems can enable smarter decision-making — whether through cooperative robot perception or optimizing human-led fleet control.</description><pubDate>Wed, 26 Mar 2025 00:00:00 GMT</pubDate></item><item><title>Lab research featured on BBC Click</title><link>https://www.proroklab.org/posts/bbc-click-feature/</link><guid isPermaLink="true">https://www.proroklab.org/posts/bbc-click-feature/</guid><description>From swarm robotics to autonomous collaboration, our innovations are shaping the future of intelligent systems.</description><pubDate>Mon, 17 Mar 2025 00:00:00 GMT</pubDate></item><item><title>Co-Optimizing Reconfigurable Environments and Policies for Decentralized Multi-Agent Navigation</title><link>https://www.proroklab.org/publications/co-optimizing-environments-tro2025/</link><guid isPermaLink="true">https://www.proroklab.org/publications/co-optimizing-environments-tro2025/</guid><description>PAPER — Zhan Gao, Guang Yang, Amanda Prorok</description><pubDate>Sat, 01 Mar 2025 00:00:00 GMT</pubDate></item><item><title>Hiring: Research Assistant in multi-robot mission control via natural 
language</title><link>https://www.proroklab.org/posts/hiring-ra-natural-language/</link><guid isPermaLink="true">https://www.proroklab.org/posts/hiring-ra-natural-language/</guid><description>The Prorok Lab is looking for a talented Research Assistant in Adaptive Multi-Robot Mission Control via Natural Language — at the intersection of LLMs, NLP, and multi-robot systems.</description><pubDate>Tue, 04 Feb 2025 00:00:00 GMT</pubDate></item><item><title>Research Assistant: Adaptive Multi-Robot Mission Control via Natural Language</title><link>https://www.proroklab.org/openings/phd-perception-aerial-swarms/</link><guid isPermaLink="true">https://www.proroklab.org/openings/phd-perception-aerial-swarms/</guid><description>Develop robot policies that respond to high-level natural language commands at the intersection of LLMs, NLP, and multi-robot systems. Collaborate with students and researchers on cutting-edge projects in a world-class research environment.</description><pubDate>Tue, 04 Feb 2025 00:00:00 GMT</pubDate></item><item><title>BenchMARL: Benchmarking Multi-Agent Reinforcement Learning</title><link>https://www.proroklab.org/publications/benchmarl-jmlr2024/</link><guid isPermaLink="true">https://www.proroklab.org/publications/benchmarl-jmlr2024/</guid><description>PAPER — Matteo Bettini, Amanda Prorok, Vincent Moens</description><pubDate>Thu, 19 Dec 2024 00:00:00 GMT</pubDate></item><item><title>BenchMARL presented at NeurIPS 2024</title><link>https://www.proroklab.org/posts/neurips2024-benchmarl/</link><guid isPermaLink="true">https://www.proroklab.org/posts/neurips2024-benchmarl/</guid><description>BenchMARL is a library designed to simplify and standardize benchmarking for Multi-Agent Reinforcement Learning — letting researchers seamlessly mix and match MARL algorithms, tasks, and models with rigorous reproducibility.</description><pubDate>Thu, 19 Dec 2024 00:00:00 GMT</pubDate></item><item><title>BenchMARL: Benchmarking Multi-Agent Reinforcement 
Learning</title><link>https://www.proroklab.org/talks/neurips2024-benchmarl/</link><guid isPermaLink="true">https://www.proroklab.org/talks/neurips2024-benchmarl/</guid><description>Conference · NeurIPS 2024</description><pubDate>Thu, 12 Dec 2024 00:00:00 GMT</pubDate></item></channel></rss>