Summary of "CityLearn Tutorial"

This tutorial, presented by Kingsley (a graduate student at UT Austin), introduces CityLearn, a simulation environment for implementing and benchmarking control algorithms for distributed energy resources (DERs) in grid-interactive buildings (GIBs). The tutorial is part of the Climate Change AI Summer School 2024 and focuses on controlling batteries and thermal energy storage in a multi-building environment using rule-based control and reinforcement learning (RL) algorithms.


Methodology and Instructions (Experiments & Exercises)

  1. Setup and Environment Initialization:
    • Confirm Python version (≥3.7).
    • Install CityLearn and dependencies.
    • Load dataset and select buildings and simulation period randomly but reproducibly (fixed random seed).
    • Limit observations to hour of day and use single-agent centralized control (a minimal setup sketch follows this list).
  2. Baseline and Random Control Agents:
    • Baseline: No battery control, only PV self-generation.
    • Random agent: Controls batteries with random actions.
    • Run inference and visualize KPIs and load profiles (see the rollout sketch after this list).
  3. Rule-Based Control (RBC) Agent:
    • Implement simple if-then rules based on hour of day to charge/discharge batteries.
    • Example: charge the battery in the first half of the day and discharge it in the second half (see the RBC sketch after this list).
    • Compare performance against baseline and random agents.
    • Exercise: tune the RBC logic to reduce cost, emissions, peak, and ramping, and to improve the load factor, by at least 5%.
  4. Tabular Q-learning Agent:
    • Discretize continuous observations and actions into bins (e.g., 24 for hour, 12 for battery actions).
    • Initialize Q-table and train using episodes with epsilon-greedy exploration.
    • Visualize the learned Q-table (a Q-learning training sketch follows this list).
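
A minimal setup sketch for step 1, assuming the CityLearn 2.x API. The dataset name, the `DataSet.get_schema` helper, and `CityLearnEnv` keyword arguments such as `active_observations` are assumptions that may differ between CityLearn versions:

```python
# Sketch: environment setup (CityLearn 2.x-style API; dataset name and
# keyword arguments are assumptions and may vary between versions).
import random

from citylearn.citylearn import CityLearnEnv
from citylearn.data import DataSet

RANDOM_SEED = 0
random.seed(RANDOM_SEED)  # fixed seed -> reproducible building/period choice

schema = DataSet.get_schema('citylearn_challenge_2022_phase_all')

# Randomly (but reproducibly) choose a subset of buildings.
buildings = random.sample(sorted(schema['buildings'].keys()), 2)

env = CityLearnEnv(
    schema,
    central_agent=True,                   # one agent controls all buildings
    buildings=buildings,
    simulation_start_time_step=0,         # a short, fixed simulation window
    simulation_end_time_step=24 * 7 - 1,
    active_observations=['hour'],         # hour of day as the only observation
)
```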
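For step 2, a rollout helper in the classic Gym style: `env.done`, the four-tuple `step` return, and `env.evaluate()` for the KPI table follow older CityLearn conventions, while newer releases use the Gymnasium API (five-tuple `step`, `reset` also returning an info dict):

```python
# Sketch: baseline (idle batteries) vs. random-control rollouts.
import numpy as np

def run_episode(env, policy):
    """Roll out one episode; the env accumulates data for KPI evaluation."""
    observations = env.reset()
    while not env.done:
        observations, reward, done, info = env.step(policy(observations))
    return env

# Baseline: zero actions keep batteries idle, leaving PV self-generation only.
def baseline_policy(observations):
    return [np.zeros(space.shape).tolist() for space in env.action_space]

# Random agent: sample actions uniformly from the action space.
def random_policy(observations):
    return [space.sample().tolist() for space in env.action_space]

run_episode(env, baseline_policy)
kpis = env.evaluate()  # KPI table: cost, emissions, peak, ramping, load factor
```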
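Step 3's controller reduces to an if-then rule on the hour observation, reusing `env` and `run_episode` from the sketches above. The threshold and charge rate below are illustrative; tuning them is exactly the stated exercise:

```python
# Sketch: hour-of-day rule-based control. Positive actions charge the
# battery, negative actions discharge; values are fractions of capacity.
def rbc_policy(observations):
    hour = int(observations[0][0])  # 'hour' is the only active observation
    # Charge through the first half of the day, discharge through the second.
    action = 1.0 / 12 if hour <= 12 else -1.0 / 12
    n_actions = env.action_space[0].shape[0]  # one action per battery
    return [[action] * n_actions]

run_episode(env, rbc_policy)
print(env.evaluate())  # compare KPIs against the baseline and random agents
```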
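Finally, a tabular Q-learning sketch for step 4 with the bin counts from the summary (24 hour bins, 12 discrete battery actions). The learning rate, discount factor, epsilon, and episode count are illustrative assumptions, as is the one-element reward list returned under centralized control:

```python
# Sketch: tabular Q-learning with epsilon-greedy exploration.
import numpy as np

N_STATES, N_ACTIONS = 24, 12                        # hour bins x action bins
q_table = np.zeros((N_STATES, N_ACTIONS))
action_values = np.linspace(-1.0, 1.0, N_ACTIONS)   # discrete battery setpoints
alpha, gamma, epsilon, n_episodes = 0.1, 0.99, 0.1, 50
rng = np.random.default_rng(0)

for _ in range(n_episodes):
    observations = env.reset()
    state = int(observations[0][0]) % N_STATES      # hour (1-24) -> bin (0-23)
    while not env.done:
        # Epsilon-greedy: explore with probability epsilon, else exploit.
        if rng.random() < epsilon:
            a_idx = int(rng.integers(N_ACTIONS))
        else:
            a_idx = int(np.argmax(q_table[state]))
        actions = [[float(action_values[a_idx])] * env.action_space[0].shape[0]]
        observations, reward, done, info = env.step(actions)
        next_state = int(observations[0][0]) % N_STATES
        # One-step Q-learning update toward reward + discounted best next value.
        td_target = reward[0] + gamma * q_table[next_state].max()
        q_table[state, a_idx] += alpha * (td_target - q_table[state, a_idx])
        state = next_state

# q_table can now be visualized, e.g. as a 24 x 12 heatmap.
```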

Category: Educational
