<?xml version="1.0" encoding="utf-8"?><?xml-stylesheet type="text/xsl" href="rss.xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>OpenRA-RL Blog</title>
        <link>https://openra-rl.dev/zh/blog</link>
        <description>OpenRA-RL Blog</description>
        <lastBuildDate>Thu, 19 Feb 2026 00:00:00 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>zh</language>
        <item>
            <title><![CDATA[Welcome to OpenRA-RL]]></title>
            <link>https://openra-rl.dev/zh/blog/welcome</link>
            <guid>https://openra-rl.dev/zh/blog/welcome</guid>
            <pubDate>Thu, 19 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[OpenRA-RL is now live! We're building a Gymnasium-style reinforcement learning environment for the OpenRA real-time strategy engine.]]></description>
            <content:encoded><![CDATA[<p>OpenRA-RL is now live! We're building a Gymnasium-style reinforcement learning environment for the OpenRA real-time strategy engine.</p>
<h2 id="what-is-openra-rl">What is OpenRA-RL?</h2>
<p>OpenRA-RL lets you train AI agents that play full games of Red Alert through the OpenRA engine. The environment provides:</p>
<ul>
<li><strong>Rich observations</strong>: 9-channel spatial tensor, per-unit stats, economy and military data</li>
<li><strong>21 action types</strong>: Move, attack, build, train, guard, transport, and more</li>
<li><strong>Real-time bridge</strong>: gRPC connection running at ~25 ticks/second</li>
<li><strong>Multiple agent types</strong>: Scripted bots, LLM agents, and RL policies</li>
</ul>
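<p>As a rough sketch of how those pieces fit together, here is a hypothetical Gymnasium-style interaction loop. The stub environment class, observation keys, map dimensions, and <code>move</code> action encoding below are illustrative assumptions, not the real OpenRA-RL API:</p>

```python
import numpy as np

# Illustrative sketch only: a stub standing in for the real gRPC-backed
# OpenRA-RL environment. Observation keys, the "move" action encoding,
# and the 64x64 map size are assumptions, not the actual API.

MAP_H, MAP_W = 64, 64

class StubOpenRAEnv:
    """Stand-in with the reset/step signature of a Gymnasium env."""

    def reset(self, seed=None):
        self.rng = np.random.default_rng(seed)
        self.tick = 0
        return self._obs(), {}

    def step(self, action):
        # The real env would serialize the order over gRPC, advance one
        # game tick (~25 ticks/second), then read back the new state.
        self.tick += 1
        reward = 0.0
        terminated = False
        truncated = self.tick >= 100  # episode cap for this sketch
        return self._obs(), reward, terminated, truncated, {}

    def _obs(self):
        # Mirrors the described layout: a 9-channel spatial tensor
        # plus scalar economy data.
        return {
            "spatial": np.zeros((9, MAP_H, MAP_W), dtype=np.float32),
            "economy": {"cash": 5000, "power": 100},
        }

env = StubOpenRAEnv()
obs, info = env.reset(seed=0)
done = False
while not done:
    # One of the 21 action types; a Move order, encoded here as a dict.
    action = {
        "type": "move",
        "unit_id": 0,
        "target": (int(env.rng.integers(MAP_W)),
                   int(env.rng.integers(MAP_H))),
    }
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```

<p>The point of the sketch is the control flow: each <code>step</code> corresponds to one game tick, so an agent issuing orders every step runs at the bridge's native ~25 Hz cadence.</p>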
<h2 id="getting-started">Getting Started</h2>
<p>Check out the <a href="https://openra-rl.dev/zh/docs/getting-started">Getting Started guide</a> to set up your first agent, or explore the <a href="https://openra-rl.dev/zh/docs/architecture">Architecture docs</a> to understand how everything connects.</p>
<h2 id="openra-bench">OpenRA-Bench</h2>
<p>We're building <a href="https://huggingface.co/spaces/openenv/OpenRA-Bench" target="_blank" rel="noopener noreferrer">OpenRA-Bench</a> — a community leaderboard for comparing agent performance with verified replay data. Stay tuned for the launch.</p>]]></content:encoded>
            <category>announcement</category>
        </item>
    </channel>
</rss>