A review of reinforcement learning methodologies for controlling occupant comfort in buildings
Dalarna University, School of Technology and Business Studies, Microdata Analysis. ORCID iD: 0000-0003-4212-8582
Dalarna University, School of Technology and Business Studies, Microdata Analysis. ORCID iD: 0000-0002-0551-9341
Dalarna University, School of Technology and Business Studies, Energy Technology. ORCID iD: 0000-0002-2369-0169
Show others and affiliations
2019 (English). In: Sustainable cities and society, ISSN 2210-6707, Vol. 51, article id 101748. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
2019. Vol. 51, article id 101748
National Category
Building Technologies
Research subject
Research Profiles 2009-2020, Complex Systems – Microdata Analysis
Identifiers
URN: urn:nbn:se:du-30601
DOI: 10.1016/j.scs.2019.101748
ISI: 000493744700053
Scopus ID: 2-s2.0-85070980900
OAI: oai:DiVA.org:du-30601
DiVA, id: diva2:1341214
Available from: 2019-08-08. Created: 2019-08-08. Last updated: 2023-02-17. Bibliographically approved.
In thesis
1. The reinforcement learning method: A feasible and sustainable control strategy for efficient occupant-centred building operation in smart cities
2019 (English)Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Over half of the world’s population lives in urban areas, a trend which is only expected to grow in the future. This increasing urbanisation presents challenges in the management of urban infrastructure systems. As an essential infrastructure of any city, the energy system presents one of the biggest challenges. As cities expand in population and economically, global energy consumption increases, and as a result so do greenhouse gas (GHG) emissions. Renewable energy and energy efficiency have been shown to be key strategies for attaining the 2030 Agenda’s sustainable development goal on energy (SDG 7). As the largest contributor to climate change, the building sector is responsible for more than half of global final energy consumption and GHG emissions. As people spend most of their time indoors, the demand for energy is made worse by the need to maintain a comfortable indoor environment. However, the emergence of the smart city and the internet of things (IoT) offers the opportunity for the smart management of buildings. Focusing on the latter strategy towards attaining SDG 7, intelligent building control offers significant potential for saving energy while respecting occupant comfort (OC). Most intelligent control strategies, however, rely on complex mathematical models which require a great deal of expertise to construct, thereby costing time and money. Furthermore, if these models are inaccurate, then energy is wasted and the comfort of occupants is decreased. Moreover, any change in the physical environment, such as a retrofit, results in obsolete models which must be re-identified to match the new state of the environment. This model-based approach seems unsustainable, and so a new model-free alternative is proposed. One such alternative is the reinforcement learning (RL) method.
This method provides an elegant solution for achieving the trade-off between energy efficiency and OC within the smart city and, more importantly, for achieving SDG 7. To address the feasibility of RL as a sustainable control strategy for efficient occupant-centred building operation, a comprehensive review of RL for controlling OC in buildings, as well as a case study implementing RL for improving OC via a window system, are presented. The outcomes of each suggest that RL is a feasible solution; however, more work is required to address current open issues, such as the cooperative multi-agent RL (MARL) needed for multi-occupant/multi-zonal buildings.

Place, publisher, year, edition, pages
Borlänge: Dalarna University, 2019
Series
Dalarna Licentiate Theses ; 12
Keywords
Markov decision processes, Reinforcement learning, Control, Building, Indoor comfort, Occupant
National Category
Building Technologies; Computer and Information Sciences
Research subject
Research Profiles 2009-2020, Complex Systems – Microdata Analysis
Identifiers
urn:nbn:se:du-30613 (URN)
978-91-88679-03-1 (ISBN)
Presentation
2019-11-01, B310, Borlänge, 10:00 (English)
Opponent
Supervisors
Available from: 2019-10-11. Created: 2019-10-07. Last updated: 2023-08-17. Bibliographically approved.
2. On the Feasibility of Reinforcement Learning in Single- and Multi-Agent Systems: The Cases of Indoor Climate and Prosumer Electricity Trading Communities
2023 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Over half of the world’s population lives in urban areas, a trend which is only expected to grow in the future. This increasing urbanisation presents challenges in the management of urban infrastructure systems. As an essential infrastructure of any city, the energy system presents one of the biggest challenges. Indeed, as cities expand in population and economically, global energy consumption increases, and as a result, so do greenhouse gas (GHG) emissions. Key to realising the goals laid out by the 2030 Agenda for Sustainable Development is the energy transition, embodied in the goals pertaining to affordable and clean energy, sustainable cities and communities, and climate action. Renewable energy systems (RESs) and energy efficiency have been shown to be key strategies towards achieving these goals. While the building sector is considered to be one of the biggest contributors to climate change, it is also seen as an area with many opportunities for realising the energy transition. Indeed, the emergence of the smart city and the internet of things (IoT), alongside photovoltaic and battery technology, offers opportunities both for the smart management of buildings and for forming self-sufficient peer-to-peer (P2P) electricity trading communities. Within this context, advanced building control offers significant potential for mitigating global warming, grid instability, soaring energy costs, and exposure to poor indoor building climates. Most advanced control strategies, however, rely on complex mathematical models which require a great deal of expertise to construct, thereby costing time and money, and which are unlikely to be frequently updated; this can lead to sub-optimal or even incorrect performance.
Furthermore, arriving at solutions in economic settings as complex and dynamic as the P2P electricity markets referred to above often leads to solutions that are computationally intractable. A model-based approach thus seems, as alluded to above, unsustainable, and I therefore propose taking a model-free alternative instead. One such alternative is the reinforcement learning (RL) method. This method provides an elegant solution that addresses many of the limitations of the more classical approaches to single- and multi-agent systems, namely those based on complex mathematical models. To address the feasibility of RL in the context of building systems, I have developed four papers. In studying the literature, it was found that, while there is much review work in support of RL for controlling energy consumption, no such works analyse RL from a methodological perspective w.r.t. controlling the comfort level of building occupants. Thus, to fill this gap in knowledge, Paper I carried out a comprehensive review of this area. Following up, Paper II conducted a case study to further assess, among other things, the computational feasibility of RL for controlling occupant comfort in a single-agent context. It was found that the RL method was able to improve thermal and indoor air quality by more than 90% when compared with historically observed occupant data. Broadening the scope of RL, Papers III and IV considered the feasibility of RL at the district scale by considering the efficient trade of renewable electricity in a peer-to-peer prosumer energy market. In particular, in Paper III, by extending an open-source economic simulation framework, multi-agent reinforcement learning (MARL) was used to optimise a dynamic price policy for trading the locally produced electricity.
Compared with a benchmark fixed price signal, the dynamic price mechanism arrived at by RL increased community net profit by more than 28% and median community self-sufficiency by more than 2%. Furthermore, emergent socio-economic behaviours, such as changes in supply w.r.t. changes in price, were identified. A limitation of Paper III, however, is that it was conducted in a single environment. To address this limitation and to assess the general validity of the proposed MARL solution, Paper IV conducted a full factorial experiment based on the factors of climate (manifested in heterogeneous demand/supply profiles and associated battery parameters), community scale, and price mechanism, in order to ascertain the response of the community w.r.t. net-loss (financial gain), self-sufficiency, and income equality from trading locally produced electricity. The central finding of Paper IV was that, w.r.t. net-loss, the community performs significantly better under a learned dynamic price mechanism than under the benchmark fixed price mechanism; furthermore, a community under such a dynamic price mechanism stands 2-to-1 odds of increased financial savings.
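The full factorial design described above can be sketched as a simple enumeration of factor combinations. The factor levels below are illustrative assumptions for exposition, not the levels used in Paper IV.

```python
from itertools import product

# Hypothetical factor levels; each combination defines one experimental cell.
factors = {
    "climate": ["cold", "temperate", "warm"],          # demand/supply + battery profiles
    "community_scale": [10, 50, 100],                  # number of prosumer households
    "price_mechanism": ["fixed", "learned_dynamic"],   # benchmark vs MARL-learned
}

# Full factorial: every level of every factor crossed with every other.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs))  # 3 * 3 * 2 = 18 experimental cells
```

Each cell would then be simulated and its responses (net-loss, self-sufficiency, income equality) recorded, allowing main effects and interactions of the factors to be estimated.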

Place, publisher, year, edition, pages
Borlänge: Dalarna University, 2023
Series
Dalarna Doctoral Dissertations ; 24
Keywords
Reinforcement Learning, Multi-Agent Reinforcement Learning, Buildings, Indoor Climate, Occupant Comfort, Positive Energy Districts, Peer-to-Peer Markets, Complex Adaptive Systems
National Category
Energy Systems; Building Technologies; Computer and Information Sciences
Identifiers
urn:nbn:se:du-45300 (URN)
978-91-88679-40-6 (ISBN)
Public defence
2023-03-31, Room 311, Borlänge, 13:00 (English)
Opponent
Supervisors
Available from: 2023-02-21. Created: 2023-01-27. Last updated: 2023-08-17. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Han, Mengjie; May, Ross; Zhang, Xingxing

Search in DiVA

By author/editor
Han, Mengjie; May, Ross; Zhang, Xingxing
By organisation
Microdata Analysis; Energy Technology
In the same journal
Sustainable cities and society
Building Technologies

Search outside of DiVA

Google; Google Scholar
