Active structural control systems are among the most effective technologies for suppressing vibrations in civil engineering structures subjected to dynamic loads such as earthquakes and wind. However, conventional active control methods often suffer from performance degradation due to time delays, measurement noise, and changing operational conditions. To overcome these limitations, data-driven control strategies based on reinforcement learning have emerged as promising alternatives. This study introduces advanced soft actor-critic (SAC)-based control approaches designed to enhance adaptability and robustness in complex real-world environments.
Challenges in Traditional Active Control Systems
Traditional controllers, such as linear quadratic Gaussian (LQG) control, rely on predefined system models and fixed parameters. In practical applications, uncertainties such as sensor noise, communication delays, and environmental variability can significantly reduce their effectiveness. These issues may lead to instability or loss of control performance during critical events. Consequently, there is a growing need for intelligent control strategies capable of learning from real-time data and adapting to evolving structural conditions.
Parameter Real-Time Regulator Strategy
The first proposed SAC-based approach introduces a parameter real-time regulator that dynamically adjusts controller parameters using feedback from the environment. By continuously updating its control policy, this strategy maintains effective vibration suppression even when system characteristics change. The regulator demonstrates strong adaptability, particularly in scenarios where external disturbances or structural properties vary over time, making it suitable for practical structural control applications.
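The regulator idea can be illustrated with a minimal sketch. This is not the study's code: the structure model, gain values, and the `policy` function (a stand-in for a trained SAC actor that rescales the baseline feedback gains from the current observation) are all illustrative assumptions.

```python
import numpy as np

def policy(obs):
    # Hypothetical stand-in for a trained SAC actor: maps the observed
    # state to a bounded multiplicative adjustment of the baseline gains.
    return 1.0 + 0.5 * np.tanh(obs[0])

def simulate(steps=200, dt=0.02):
    x = np.array([0.1, 0.0])                     # [displacement, velocity]
    K = np.array([50.0, 10.0])                   # baseline feedback gains
    A = np.array([[0.0, 1.0], [-100.0, -1.0]])   # toy single-DOF structure
    B = np.array([0.0, 1.0])
    for _ in range(steps):
        scale = policy(x)                        # regulator retunes gains online
        u = -scale * (K @ x)                     # adjusted control force
        x = x + dt * (A @ x + B * u)             # forward-Euler state update
    return x

final = simulate()
```

The key point is the loop structure: the baseline feedback law stays in place, while the learned policy continuously adjusts its parameters as conditions change.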
Independent Data-Driven Controller
The second strategy replaces conventional control algorithms entirely with an independent reinforcement learning controller. This SAC-based controller learns optimal control actions directly from environmental interactions without relying on prior system models. As a result, it can handle highly nonlinear behavior and unknown dynamics. Experimental results indicate that this approach significantly improves displacement control compared to traditional methods, though its performance in acceleration mitigation varies depending on operating conditions.
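A sketch of this model-free arrangement, under stated assumptions: the weights `W` stand in for a trained SAC actor (here just random numbers for illustration), and the tanh squashing that bounds the action is the usual SAC convention, not a detail confirmed by the study.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=2)   # placeholder for trained actor weights

def actor(obs, u_max=100.0):
    # SAC-style bounded action: tanh squashing keeps |u| <= u_max,
    # so the command stays within actuator limits by construction.
    return u_max * np.tanh(W @ obs)

obs = np.array([0.05, -0.2])        # raw sensor readings [drift, velocity]
u = actor(obs)                       # control force, no structural model used
```

Unlike the regulator strategy, there is no baseline controller here: the actor maps sensor readings directly to a force command.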
Hybrid Compensator Strategy
The third strategy combines reinforcement learning with traditional control methods through a compensator mechanism. Instead of replacing the baseline controller, the SAC algorithm adjusts the control signal in real time to compensate for uncertainties, delays, and noise. This hybrid approach leverages the stability of classical control and the adaptability of machine learning. Among the tested methods, the compensator demonstrates the most effective reduction in structural responses, achieving the highest improvements in both drift and acceleration control.
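The compensator structure can be sketched as a baseline command plus a bounded learned correction. Again a hedged illustration: the gains, the correction bound, and the `correction` function (a hypothetical proxy for a SAC actor trained to cancel delay and noise effects) are assumptions, not the paper's implementation.

```python
import numpy as np

K = np.array([50.0, 10.0])           # baseline (model-based) feedback gains

def correction(obs, u_base, du_max=20.0):
    # Placeholder for a SAC actor: an additive term bounded by du_max,
    # as a tanh-squashed policy output would be.
    return du_max * np.tanh(0.01 * (obs[1] - 0.1 * u_base))

x = np.array([0.05, -0.3])           # delayed/noisy state estimate
u_base = -K @ x                      # classical baseline command
u_total = u_base + correction(x, u_base)   # compensated command
```

The design choice is that the classical controller remains the backbone, so stability rests on it, while the learned term only nudges the signal within a fixed bound.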
Performance Evaluation Under Uncertainty
To assess robustness, the control environment was augmented with random time delays and measurement noise, simulating realistic operating conditions, and seven performance criteria were used to evaluate effectiveness. Results show that all SAC-based strategies maintain stable and efficient control, whereas the traditional LQG controller loses effectiveness under severe disturbances. The hybrid compensator (the third strategy) achieves the best overall performance, reducing peak inter-story drift by up to 84.50% and peak acceleration by 63.45%. These findings highlight the strong potential of reinforcement learning for next-generation intelligent structural control systems.
🏗️ Civil Engineering Awards
👉 Visit our Website: civilengineeringawards.com
#DataDrivenControl
#AIinEngineering
#InfrastructureSafety
#ControlSystems
#IntelligentInfrastructure
#StructuralHealth
#RobustControl
#EngineeringInnovation
#SeismicProtection
#FutureEngineering