Deep Reinforcement Learning Framework for Energy Management of Energy Hubs


A model-free, data-driven deep reinforcement learning (DRL) framework can train an intelligent controller that exploits available information to optimally schedule an energy hub with the aim of minimizing energy costs and emissions. By formulating the energy hub scheduling problem over a multi-dimensional continuous state and action space, this approach enables more efficient operation while accounting for nonlinear physical characteristics of the energy hub components, such as the nonconvex feasible operating regions of combined heat and power (CHP) units, valve-point effects of power-only units, and the dynamic efficiency of fuel cells. Moreover, to help the deep deterministic policy gradient (DDPG) agent learn an optimal policy efficiently, a hybrid forecasting model based on convolutional neural networks (CNNs) and bidirectional long short-term memory (BLSTM) networks is developed to mitigate the risk associated with photovoltaic (PV) power generation, which can be highly intermittent, particularly on cloudy days. The effectiveness and applicability of the proposed scheduling framework in reducing energy costs and emissions while coping with uncertainty are demonstrated by comparing it against conventional robust optimization and stochastic programming approaches, as well as state-of-the-art DRL methods, in several case studies.
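The hybrid CNN-BLSTM forecaster described above could be sketched roughly as follows. This is a minimal illustration in PyTorch, not the speaker's implementation: the layer sizes, the 4 input features, the 24-step input window, and the one-step forecast horizon are all assumptions for the example.

```python
import torch
import torch.nn as nn

class CNNBLSTMForecaster(nn.Module):
    """Hypothetical sketch of a CNN + bidirectional LSTM PV forecaster.

    All dimensions (features, channels, hidden size, horizon) are
    illustrative assumptions, not values from the talk.
    """
    def __init__(self, n_features=4, hidden=32, horizon=1):
        super().__init__()
        # 1-D convolution extracts local temporal patterns from the window
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Bidirectional LSTM captures dependencies in both time directions
        self.blstm = nn.LSTM(16, hidden, batch_first=True, bidirectional=True)
        # Linear head maps the final BLSTM state to the PV power forecast
        self.head = nn.Linear(2 * hidden, horizon)

    def forward(self, x):                    # x: (batch, window, n_features)
        z = self.conv(x.transpose(1, 2))     # -> (batch, 16, window)
        out, _ = self.blstm(z.transpose(1, 2))  # -> (batch, window, 2*hidden)
        return self.head(out[:, -1, :])      # forecast from last time step

model = CNNBLSTMForecaster()
x = torch.randn(8, 24, 4)   # a batch of 24-step weather/irradiance windows
y = model(x)
print(tuple(y.shape))       # (8, 1): one PV power forecast per sample
```

In a scheduling loop, the forecaster's PV output prediction would be appended to the DDPG agent's state vector, so the policy can anticipate intermittent generation rather than react to it.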
Speaker(s): Dr. Hussein Abdeltawab

