<?xml version="1.0" encoding="utf-8"?><?xml-stylesheet type="text/xsl" href="atom.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <id>https://openra-rl.dev/zh/blog</id>
    <title>OpenRA-RL Blog</title>
    <updated>2026-02-19T00:00:00.000Z</updated>
    <generator>https://github.com/jpmonette/feed</generator>
    <link rel="alternate" href="https://openra-rl.dev/zh/blog"/>
    <subtitle>OpenRA-RL Blog</subtitle>
    <icon>https://openra-rl.dev/zh/img/favicon.svg</icon>
    <entry>
        <title type="html"><![CDATA[Welcome to OpenRA-RL]]></title>
        <id>https://openra-rl.dev/zh/blog/welcome</id>
        <link href="https://openra-rl.dev/zh/blog/welcome"/>
        <updated>2026-02-19T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[OpenRA-RL is now live! We're building a Gymnasium-style reinforcement learning environment for the OpenRA real-time strategy engine.]]></summary>
        <content type="html"><![CDATA[<p>OpenRA-RL is now live! We're building a Gymnasium-style reinforcement learning environment for the OpenRA real-time strategy engine.</p>
<h2 id="what-is-openra-rl">What is OpenRA-RL?</h2>
<p>OpenRA-RL lets you train AI agents that play full games of Red Alert through the OpenRA engine. The environment provides:</p>
<ul>
<li><strong>Rich observations</strong>: 9-channel spatial tensor, per-unit stats, economy and military data</li>
<li><strong>21 action types</strong>: Move, attack, build, train, guard, transport, and more</li>
<li><strong>Real-time bridge</strong>: gRPC connection running at ~25 ticks/second</li>
<li><strong>Multiple agent types</strong>: Scripted bots, LLM agents, and RL policies</li>
</ul>
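<p>As a rough sketch of what driving such an environment could look like, here is a Gymnasium-style <code>reset()</code>/<code>step()</code> loop over a stand-in stub; the class name, observation keys, and action dictionary below are illustrative assumptions, not the actual OpenRA-RL API.</p>

```python
# Hypothetical sketch of a Gymnasium-style control loop.
# DummyOpenRAEnv is a stub standing in for the real environment;
# its observation/action shapes are assumptions for illustration only.

class DummyOpenRAEnv:
    """Stub following the Gymnasium reset()/step() contract."""

    def __init__(self, max_ticks=10):
        self.max_ticks = max_ticks
        self.tick = 0

    def reset(self, seed=None):
        self.tick = 0
        obs = {"spatial": [[0.0] * 4] * 4, "economy": {"cash": 5000}}
        return obs, {}  # Gymnasium returns (observation, info)

    def step(self, action):
        self.tick += 1
        obs = {"spatial": [[0.0] * 4] * 4,
               "economy": {"cash": 5000 - self.tick}}
        reward = 0.0
        terminated = self.tick >= self.max_ticks  # game over
        truncated = False                         # no time-limit cutoff here
        return obs, reward, terminated, truncated, {}


env = DummyOpenRAEnv()
obs, info = env.reset(seed=0)
done = False
steps = 0
while not done:
    # An illustrative action dict; the real action space has 21 types.
    action = {"type": "move", "unit": 0, "target": (3, 3)}
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    steps += 1
```

<p>The five-element <code>step()</code> return (observation, reward, terminated, truncated, info) follows the Gymnasium API, so a policy written against this loop shape should transfer to the real environment with minimal changes.</p>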
<h2 id="getting-started">Getting Started</h2>
<p>Check out the <a href="https://openra-rl.dev/zh/docs/getting-started">Getting Started guide</a> to set up your first agent, or explore the <a href="https://openra-rl.dev/zh/docs/architecture">Architecture docs</a> to understand how everything connects.</p>
<h2 id="openra-bench">OpenRA-Bench</h2>
<p>We're building <a href="https://huggingface.co/spaces/openenv/OpenRA-Bench" target="_blank" rel="noopener noreferrer">OpenRA-Bench</a>: a community leaderboard for comparing agent performance with verified replay data. Stay tuned for the launch.</p>]]></content>
        <author>
            <name>OpenRA-RL Team</name>
            <uri>https://github.com/yxc20089/OpenRA-RL</uri>
        </author>
        <category label="announcement" term="announcement"/>
    </entry>
</feed>