Paper
ChemDT: A stochastic decision transformer for chemical process control
Authors
Junseop Shin, Joonsoo Park, Jaehyun Shim, Jong Min Lee*
Journal
Computers & Chemical Engineering
Page
109155
Year
2025

Rapid industrial advancement has made process modeling increasingly complex, and conventional model-based control methods struggle both with models that inadequately capture system complexity and with the significant computational burden those models impose. Reinforcement learning (RL), which leverages operational data instead of explicit models, often adapts better to these complexities. However, RL's need for extensive online exploration poses risks in safety-sensitive environments such as chemical processes. To address this, we propose an offline RL approach based on the Decision Transformer (DT) architecture, named ChemDT. ChemDT incorporates stochastic policies with maximum-entropy regularization, broadening policy coverage under limited offline data. To mitigate DT's vulnerability to stochastic environments, we introduce a monitoring variable, λ, that enables selective responses to significant stochastic events amid pervasive disturbances. Validated on a continuous stirred tank reactor (CSTR) and an industrial-scale fed-batch reactor, our approach demonstrates superior control performance compared with other offline RL algorithms.
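
As a rough illustration of the core idea (not the authors' implementation), the sketch below shows a Decision-Transformer-style policy with a diagonal-Gaussian action head trained by behaviour-cloning negative log-likelihood plus a maximum-entropy bonus. All module sizes, the token layout, and the entropy weight alpha are assumptions made for illustration, and the λ monitoring mechanism is omitted because the abstract does not specify how it is computed.

# Illustrative sketch only; layer sizes, the entropy weight `alpha`, and the
# token layout are assumptions, not details taken from the paper.
import torch
import torch.nn as nn
from torch.distributions import Normal


class StochasticDTPolicy(nn.Module):
    """Decision-Transformer-style policy with a diagonal-Gaussian action head."""

    def __init__(self, state_dim, act_dim, embed_dim=128, n_layers=3,
                 n_heads=4, max_len=64):
        super().__init__()
        # Separate embeddings for return-to-go, state, and action tokens,
        # following the usual Decision Transformer input layout.
        self.embed_rtg = nn.Linear(1, embed_dim)
        self.embed_state = nn.Linear(state_dim, embed_dim)
        self.embed_action = nn.Linear(act_dim, embed_dim)
        self.embed_time = nn.Embedding(max_len, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads,
                                           dim_feedforward=4 * embed_dim,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        # Stochastic head: mean and log-std of a Gaussian over actions.
        self.mean_head = nn.Linear(embed_dim, act_dim)
        self.log_std_head = nn.Linear(embed_dim, act_dim)

    def forward(self, rtg, states, actions, timesteps):
        # rtg: (B, T, 1), states: (B, T, state_dim),
        # actions: (B, T, act_dim), timesteps: (B, T) long tensor.
        B, T = states.shape[:2]
        t_emb = self.embed_time(timesteps)
        tokens = torch.stack(
            [self.embed_rtg(rtg) + t_emb,
             self.embed_state(states) + t_emb,
             self.embed_action(actions) + t_emb],
            dim=2,
        ).reshape(B, 3 * T, -1)  # interleave (R, s, a) per timestep
        # Causal mask so each token attends only to earlier tokens.
        causal = torch.triu(torch.full((3 * T, 3 * T), float("-inf"),
                                       device=states.device), diagonal=1)
        h = self.transformer(tokens, mask=causal)
        # Hidden states at the state tokens are used to predict actions.
        h_state = h.reshape(B, T, 3, -1)[:, :, 1]
        mean = self.mean_head(h_state)
        log_std = self.log_std_head(h_state).clamp(-5.0, 2.0)
        return Normal(mean, log_std.exp())


def max_entropy_loss(dist, target_actions, alpha=0.1):
    # Behaviour-cloning NLL plus an entropy bonus (maximum-entropy term);
    # `alpha` is an assumed trade-off weight.
    nll = -dist.log_prob(target_actions).sum(-1).mean()
    entropy = dist.entropy().sum(-1).mean()
    return nll - alpha * entropy

At deployment such a policy would be conditioned on a desired return-to-go and the recent trajectory, and the control action would be sampled from (or taken as the mean of) the Gaussian at the latest state token; the entropy term keeps the learned policy from collapsing onto the narrow action coverage of a limited offline dataset.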