1 - 9 of 9
  • 1.
    May, Ross
    et al.
    Högskolan Dalarna, Institutionen för information och teknik, Mikrodataanalys.
    Huang, Pei
    Högskolan Dalarna, Institutionen för information och teknik, Energiteknik.
    A multi-agent reinforcement learning approach for investigating and optimising peer-to-peer prosumer energy markets (2023). In: Applied Energy, ISSN 0306-2619, E-ISSN 1872-9118, Vol. 334, article id 120705. Article in journal (Refereed)
    Abstract [en]

    Current power grid infrastructure was not designed with climate change in mind, and its stability, especially at peak demand periods, has therefore been compromised. Furthermore, in light of the UN Intergovernmental Panel on Climate Change reports concerning global warming and the goal of the 2015 Paris climate agreement to constrain the global temperature increase to within 1.5–2 °C above pre-industrial levels, urgent sociotechnical measures need to be taken. Smart microgrids and renewable energy technology have together been proposed as a possible solution to help mitigate global warming and grid instability. Within this context, well-managed demand-side flexibility is crucial for efficiently utilising on-site solar energy. To this end, a well-designed dynamic pricing mechanism can organise the actors within such a system to enable the efficient trade of on-site energy, thereby contributing to the decarbonisation and grid-security goals alluded to above. However, designing such a mechanism in an economic setting as complex and dynamic as this one often leads to computationally intractable solutions. To overcome this problem, in this work we use multi-agent reinforcement learning (MARL) alongside Foundation, an open-source economic simulation framework built by Salesforce Research, to design a dynamic price policy. By incorporating a peer-to-peer (P2P) community of prosumers with heterogeneous demand/supply profiles and battery storage into Foundation, our results from data-driven simulations show that MARL, compared with a baseline fixed price signal, can learn a dynamic price signal that achieves both a lower community electricity cost and a higher community self-sufficiency. Furthermore, emergent socio-economic behaviours have been identified, such as price elasticity, and community coordination leading to high grid feed-in during periods of overall excess photovoltaic (PV) supply and, conversely, high community trading during overall low PV supply. Our proposed approach can be used by practitioners to aid them in designing P2P energy trading markets.

    Download full text (pdf)
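    The learned-dynamic-price idea summarised in the abstract can be illustrated with a much smaller, self-contained sketch. The snippet below is a hypothetical toy, not the paper's Foundation-based MARL setup: a single tabular Q-learning "price setter" observes the community's discretised net demand and picks one of three illustrative price tiers, receiving the negative community trading cost as reward. All names, tiers, and dynamics here are assumptions made purely for illustration.

    ```python
    import random

    # Hypothetical sketch of a learned dynamic price signal; NOT the paper's
    # Foundation-based environment. One tabular Q-learning agent sets a
    # community price tier after observing the discretised net demand.

    PRICE_TIERS = [0.5, 1.0, 1.5]   # illustrative price levels (currency/kWh)


    class PriceAgent:
        """Epsilon-greedy tabular Q-learning over (net-demand state, price tier)."""

        def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, eps=0.2):
            self.q = [[0.0] * n_actions for _ in range(n_states)]
            self.alpha, self.gamma, self.eps = alpha, gamma, eps
            self.n_actions = n_actions

        def act(self, state, rng):
            if rng.random() < self.eps:
                return rng.randrange(self.n_actions)      # explore
            row = self.q[state]
            return row.index(max(row))                    # exploit

        def learn(self, s, a, reward, s_next):
            target = reward + self.gamma * max(self.q[s_next])
            self.q[s][a] += self.alpha * (target - self.q[s][a])


    def run(episodes=3000, seed=0):
        """Simulate random community net demand in {-1, 0, +1} kWh per step.

        A deficit (+1) means the community buys at the chosen tier; a surplus
        (-1) means it sells at that tier. Reward is the negative community
        cost, so the agent should learn cheap prices for deficits and high
        prices for surpluses.
        """
        rng = random.Random(seed)
        agent = PriceAgent(n_states=3, n_actions=len(PRICE_TIERS))
        total_cost = 0.0
        for _ in range(episodes):
            demand = rng.choice([-1, 0, 1])
            state = demand + 1                        # map {-1,0,1} -> {0,1,2}
            action = agent.act(state, rng)
            cost = demand * PRICE_TIERS[action]       # buyers pay, sellers earn
            next_state = rng.choice([0, 1, 2])
            agent.learn(state, action, -cost, next_state)
            total_cost += cost
        return agent, total_cost
    ```

    Under this toy reward, the learned table comes to prefer the cheapest tier in the deficit state and the highest tier in the surplus state; the paper's actual environment (heterogeneous prosumers, battery storage, the Foundation framework) is far richer than this sketch.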
  • 2.
    May, Ross
    et al.
    Högskolan Dalarna, Institutionen för information och teknik, Mikrodataanalys.
    Carling, Kenneth
    Högskolan Dalarna, Institutionen för information och teknik, Mikrodataanalys.
    Huang, Pei
    Högskolan Dalarna, Institutionen för information och teknik, Energiteknik.
    Does a smart agent overcome the tragedy of the commons in residential prosumer communities? (2023). Article in journal (Refereed)
  • 3.
    May, Ross
    Högskolan Dalarna, Institutionen för information och teknik, Mikrodataanalys.
    On the Feasibility of Reinforcement Learning in Single- and Multi-Agent Systems: The Cases of Indoor Climate and Prosumer Electricity Trading Communities (2023). Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    Over half of the world's population live in urban areas, a trend which is expected to only grow as we move further into the future. With this increasing trend in urbanisation, challenges are presented in the form of the management of urban infrastructure systems. As an essential infrastructure of any city, the energy system presents itself as one of the biggest challenges. Indeed, as cities expand in population and economically, global energy consumption increases, and as a result, so do greenhouse gas (GHG) emissions. Key to realising the goals laid out by the 2030 Agenda for Sustainable Development is the energy transition, embodied in the goals pertaining to affordable and clean energy, sustainable cities and communities, and climate action. Renewable energy systems (RESs) and energy efficiency have been shown to be key strategies towards achieving these goals. While the building sector is considered to be one of the biggest contributors to climate change, it is also seen as an area with many opportunities for realising the energy transition. Indeed, the emergence of the smart city and the internet of things (IoT), alongside photovoltaic and battery technology, offers opportunities both for the smart management of buildings and for forming self-sufficient peer-to-peer (P2P) electricity trading communities. Within this context, advanced building control offers significant potential for mitigating global warming, grid instability, soaring energy costs, and exposure to poor indoor building climates. Most advanced control strategies, however, rely on complex mathematical models, which require a great deal of expertise to construct, thereby costing time and money, and are unlikely to be frequently updated, which can lead to sub-optimal or even incorrect performance. Furthermore, arriving at solutions in economic settings as complex and dynamic as the P2P electricity markets referred to above often leads to solutions that are computationally intractable. A model-based approach thus seems unsustainable, and I propose taking a model-free alternative instead. One such alternative is the reinforcement learning (RL) method. This method provides an elegant solution that addresses many of the limitations seen in more classical approaches, those based on complex mathematical models, to single- and multi-agent systems. To address the feasibility of RL in the context of building systems, I have developed four papers. In studying the literature, while there is much review work in support of RL for controlling energy consumption, it was found that no such works analyse RL from a methodological perspective with respect to controlling the comfort level of building occupants. Thus, in Paper I, to fill this gap in knowledge, a comprehensive review in this area was carried out. To follow up, in Paper II, a case study was conducted to further assess, among other things, the computational feasibility of RL for controlling occupant comfort in a single-agent context. It was found that the RL method was able to improve thermal and indoor air quality by more than 90% when compared with historically observed occupant data. Broadening the scope of RL, Papers III and IV considered the feasibility of RL at the district scale by considering the efficient trade of renewable electricity in a peer-to-peer prosumer energy market. In particular, in Paper III, by extending an open-source economic simulation framework, multi-agent reinforcement learning (MARL) was used to optimise a dynamic price policy for trading the locally produced electricity. Compared with a benchmark fixed price signal, the dynamic price mechanism arrived at by RL increased community net profit by more than 28%, and median community self-sufficiency by more than 2%. Furthermore, emergent social-economic behaviours, such as changes in supply with respect to changes in price, were identified. A limitation of Paper III, however, is that it was conducted in a single environment. To address this limitation and to assess the general validity of the proposed MARL solution, in Paper IV a full factorial experiment based on the factors of climate (manifested in heterogeneous demand/supply profiles and associated battery parameters), community scale, and price mechanism was conducted in order to ascertain the response of the community with respect to net loss (financial gain), self-sufficiency, and income equality from trading locally produced electricity. The central finding of Paper IV was that the community, with respect to net loss, performs significantly better under a learned dynamic price mechanism than under the benchmark fixed price mechanism, and, furthermore, a community under such a dynamic price mechanism stands odds of 2 to 1 of increased financial savings.

    Download full text (pdf)
    Download presentation image (jpg)
  • 4.
    Han, Mengjie
    et al.
    Högskolan Dalarna, Akademin Industri och samhälle, Mikrodataanalys.
    May, Ross
    Högskolan Dalarna, Akademin Industri och samhälle, Mikrodataanalys.
    Zhang, Xingxing
    Högskolan Dalarna, Akademin Industri och samhälle, Energiteknik.
    Wang, Xinru
    Pan, Song
    Da, Yan
    Jin, Yuan
    A novel reinforcement learning method for improving occupant comfort via window opening and closing (2020). In: Sustainable Cities and Society, ISSN 2210-6707, Vol. 61, article id 102247. Article in journal (Refereed)
    Abstract [en]

    An occupant's window opening and closing behaviour can significantly influence the level of comfort in the indoor environment. Such behaviour is, however, complex to predict and control conventionally. This paper therefore proposes a novel reinforcement learning (RL) method for the advanced control of window opening and closing. The RL control aims at optimising the time point for window opening/closing by observing and learning from the environment. The theory of model-free RL control is developed with the objective of improving occupant comfort and is applied to historical field-measurement data taken from an office building in Beijing. Preliminary testing of the RL control is conducted by evaluating the control method's actions. The results show that the RL control strategy improves thermal and indoor air quality by more than 90% when compared with the actual historically observed occupant data. This methodology establishes a prototype for optimally controlling window opening and closing behaviour. It can be further extended by including more environmental parameters and more objectives, such as energy consumption. The model-free characteristic of RL avoids the disadvantage of implementing inaccurate or complex models of the environment, offering great potential for the application of intelligent control in buildings.
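    As a complement to the abstract, here is a minimal, hypothetical sketch of model-free window control. It is not the paper's method or data: a two-state indoor-air model (CO2 acceptable vs. high) in which a tabular Q-learning agent chooses to open or close the window, trading air quality against an assumed fixed "draft/heat-loss" penalty for opening. All states, rewards, and dynamics below are invented for illustration only.

    ```python
    import random

    # Hypothetical two-state sketch of RL window control (not the paper's model).
    # States: 0 = CO2 acceptable, 1 = CO2 high. Actions: 0 = keep closed, 1 = open.
    OK, HIGH = 0, 1
    CLOSE, OPEN = 0, 1
    DRAFT_PENALTY = 0.6   # assumed comfort/energy cost of opening the window


    def step(state, action, rng):
        """Toy dynamics: opening always restores acceptable air; keeping the
        window closed lets CO2 build up with probability 0.3."""
        if action == OPEN:
            next_state = OK
        elif state == OK:
            next_state = HIGH if rng.random() < 0.3 else OK
        else:
            next_state = HIGH
        comfort = 1.0 if next_state == OK else 0.0
        reward = comfort - (DRAFT_PENALTY if action == OPEN else 0.0)
        return next_state, reward


    def train(steps=5000, alpha=0.1, gamma=0.9, eps=0.2, seed=1):
        """Epsilon-greedy tabular Q-learning on the toy window environment."""
        rng = random.Random(seed)
        q = [[0.0, 0.0], [0.0, 0.0]]   # q[state][action]
        state = OK
        for _ in range(steps):
            if rng.random() < eps:
                action = rng.randrange(2)                       # explore
            else:
                action = CLOSE if q[state][CLOSE] >= q[state][OPEN] else OPEN
            next_state, reward = step(state, action, rng)
            q[state][action] += alpha * (
                reward + gamma * max(q[next_state]) - q[state][action]
            )
            state = next_state
        return q
    ```

    With these assumed numbers the agent learns to open the window in the high-CO2 state while mostly keeping it closed otherwise; the paper, by contrast, learns from historical field measurements of a real office building rather than from a hand-written simulator.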

  • 5.
    Han, Mengjie
    et al.
    Högskolan Dalarna, Akademin Industri och samhälle, Mikrodataanalys.
    May, Ross
    Högskolan Dalarna, Akademin Industri och samhälle, Mikrodataanalys.
    Zhang, Xingxing
    Högskolan Dalarna, Akademin Industri och samhälle, Energiteknik.
    Wang, Xinru
    Pan, Song
    Yan, Da
    Jin, Yuan
    Xu, Liguo
    A review of reinforcement learning methodologies for controlling occupant comfort in buildings (2019). In: Sustainable Cities and Society, ISSN 2210-6707, Vol. 51, article id 101748. Article in journal (Refereed)
  • 6.
    May, Ross
    et al.
    Högskolan Dalarna, Akademin Industri och samhälle, Mikrodataanalys.
    Zhang, Xingxing
    Högskolan Dalarna, Akademin Industri och samhälle, Energiteknik.
    Wu, J.
    Han, Mengjie
    Högskolan Dalarna, Akademin Industri och samhälle, Mikrodataanalys.
    Reinforcement learning control for indoor comfort: A survey (2019). In: IOP Conference Series: Materials Science and Engineering, 2019, Vol. 609, no. 6, article id 062011. Conference paper (Refereed)
    Abstract [en]

    Building control systems are prone to failure in complex and dynamic environments. The reinforcement learning (RL) method is becoming increasingly attractive in automatic control, and its success in many artificial intelligence applications has raised the open question of how to implement the method in building control systems. This paper therefore conducts a comprehensive review of the RL methods applied in control systems for indoor comfort and the indoor environment. The empirical applications of RL-based control systems are then presented, organised by optimisation objective and by the measurement of energy use. The paper describes the class of algorithms and implementation details regarding how the value functions have been represented and how the policies are improved. It is expected to clarify the feasible theory and functions of RL for building control systems, which would promote their wider application and thus contribute to social and economic benefits in the energy and built environments.

    Download full text (pdf)
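    The survey's framing, how value functions are represented and how policies are improved, can be made concrete with a tiny generic sketch. The code below is not drawn from any surveyed paper: it shows greedy policy improvement over a tabular value function, picking in each state the action that maximises immediate reward plus discounted next-state value. The two-state "comfort" MDP is invented purely for illustration.

    ```python
    # Generic greedy policy improvement over a tabular value function.
    # The two-state example below is a hypothetical illustration.

    def improve_policy(n_states, actions, transition, reward, v, gamma=0.9):
        """Return the greedy policy w.r.t. value estimate v (one action per state)."""
        policy = []
        for s in range(n_states):
            q_values = {a: reward(s, a) + gamma * v[transition(s, a)]
                        for a in actions}
            policy.append(max(q_values, key=q_values.get))
        return policy


    # Toy deterministic MDP: state 0 is "uncomfortable", state 1 is "comfortable".
    # Action 0 stays put; action 1 moves towards comfort at a small control cost.
    def transition(s, a):
        return 1 if a == 1 else s


    def reward(s, a):
        return (1.0 if s == 1 else 0.0) - (0.1 if a == 1 else 0.0)


    v = [0.0, 10.0]   # a value estimate that already prefers the comfortable state
    policy = improve_policy(2, [0, 1], transition, reward, v)
    # In state 0 it pays the small cost to reach comfort; in state 1 it stays put.
    ```

    Tabular representations like `v` here are the simplest case the survey covers; the reviewed works also use function approximation when the state space is too large to enumerate.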
  • 7.
    May, Ross
    Högskolan Dalarna, Akademin Industri och samhälle, Mikrodataanalys.
    The reinforcement learning method: A feasible and sustainable control strategy for efficient occupant-centred building operation in smart cities (2019). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Over half of the world's population lives in urban areas, a trend which is expected to only grow as we move further into the future. With this increasing trend in urbanisation, challenges are presented in the form of the management of urban infrastructure systems. As an essential infrastructure of any city, the energy system presents itself as one of the biggest challenges. As cities expand in population and economically, global energy consumption increases, and as a result so do greenhouse gas (GHG) emissions. Renewable energy and energy efficiency have been shown to be key strategies for attaining the 2030 Agenda's sustainable development goal on energy (SDG 7). As the largest contributor to climate change, the building sector is responsible for more than half of the global final energy consumption and GHG emissions. As people spend most of their time indoors, the demand for energy is exacerbated by the need to maintain the comfort level of the indoor environment. However, the emergence of the smart city and the internet of things (IoT) offers the opportunity for the smart management of buildings. Focusing on the latter strategy towards attaining SDG 7, intelligent building control offers significant potential for saving energy while respecting occupant comfort (OC). Most intelligent control strategies, however, rely on complex mathematical models which require a great deal of expertise to construct, thereby costing time and money. Furthermore, if these models are inaccurate, energy is wasted and occupant comfort decreases. Moreover, any change in the physical environment, such as a retrofit, results in obsolete models which must be re-identified to match the new state of the environment. This model-based approach seems unsustainable, and so a new model-free alternative is proposed. One such alternative is the reinforcement learning (RL) method. This method provides an elegant way of accomplishing the trade-off between energy efficiency and OC within the smart city and, more importantly, of achieving SDG 7. To address the feasibility of RL as a sustainable control strategy for efficient occupant-centred building operation, a comprehensive review of RL for controlling OC in buildings as well as a case study implementing RL for improving OC via a window system are presented. The outcomes of each seem to suggest RL as a feasible solution; however, more work is required to address current open issues, such as the cooperative multi-agent RL (MARL) needed for multi-occupant/multi-zonal buildings.

    Download full text (pdf)
  • 8.
    Han, Mengjie
    et al.
    Högskolan Dalarna, Akademin Industri och samhälle, Mikrodataanalys.
    Zhang, Xingxing
    Högskolan Dalarna, Akademin Industri och samhälle, Energiteknik.
    Xu, Liguo
    May, Ross
    Högskolan Dalarna, Akademin Industri och samhälle, Mikrodataanalys.
    Pan, Song
    Wu, Jinshun
    A review of reinforcement learning methodologies on control systems for building energy (2018). Report (Other academic)
    Abstract [en]

    Energy use leads directly to substantial consumption of non-renewable fossil resources, and exploiting fossil energy affects both climate and health through unavoidable emissions. Raising awareness, choosing alternative energy sources and developing energy-efficient equipment all contribute to reducing the demand for fossil energy, but implementing these measures usually takes a long time. Since building energy amounts to around one-third of global energy consumption, and systems in buildings, e.g. HVAC, can be managed at the level of the individual building, advanced and reliable control techniques for buildings are expected to make a substantial contribution to reducing global energy consumption. Among these control techniques, the model-free, data-driven reinforcement learning method stands out as distinctive and applicable. The success of the reinforcement learning method in many artificial intelligence applications clearly indicates its potential for building energy control. A rich set of algorithms complement each other and support the quality of the optimisation. As the central brain of a smart building automation system, the control technique directly affects the performance of the building. However, an examination of previous work based on reinforcement learning methodologies is not available and, moreover, how the algorithms can be developed further remains unclear. This paper therefore briefly analyses the empirical applications from a methodological point of view and proposes future research directions.

    Download full text (pdf)
  • 9.
    Han, Mengjie
    et al.
    Högskolan Dalarna, Akademin Industri och samhälle, Mikrodataanalys.
    May, Ross
    Högskolan Dalarna, Akademin Industri och samhälle, Mikrodataanalys.
    Zhang, Xingxing
    Högskolan Dalarna, Akademin Industri och samhälle, Energiteknik.
    Wang, Xinru
    Pan, Song
    Yan, Da
    Jin, Yuan
    A novel reinforcement learning method for improving occupant comfort via window opening and closing. Manuscript (preprint) (Other academic)