Publications
The monitoring and control of Critical Energy Infrastructure (CEI) is nowadays entrusted to Smart Grids (SGs). SGs rely on massive amounts of data and services to provide “awareness” of the status of the system. To this end, distributed computing schemes have been applied, based on decentralized communication, data collection, extraction, loading, and analysis. These schemes are fully aligned with the Edge Computing (EC) paradigm. EC is an emerging paradigm that provides capabilities for processing and analyzing data away from the cloud, at the edge of the network, closer to the source of the data. It offers multiple benefits, including improved application performance, reduced network latency, and data locality. Because of these characteristics, EC is expected to have a great impact on SGs. However, a crucial aspect in implementing EC is a company’s foundational technology for genuinely progressing in the cyber, digital, and cloud transformation of SGs. The authors strongly believe that the successful implementation of cloud/edge-based solutions strictly depends on employing new core architectures based on modern, advanced cloud-native solutions, i.e., patterns, tools, techniques, and technologies derived from cloud-based design. On this basis, an Edge Platform-as-a-Service (PaaS) has been designed, developed, deployed, and used as the foundation of a flexible data platform at the edge, made up of fast-deployable, open-source, and free-to-use PaaS services.
Energy management is crucial for various activities in the energy sector, such as effective exploitation of energy resources, reliability in supply, energy conservation, and integrated energy systems. In this context, several machine learning and deep learning models have been developed during the last decades focusing on energy demand and renewable energy source (RES) production forecasting. However, most forecasting models are trained using batch learning, ingesting all data to build a model in a static fashion. The main drawback of models trained offline is that they tend to mis-calibrate after launch. In this study, we propose a novel, integrated online (or incremental) learning framework that recognizes the dynamic nature of learning environments in energy-related time-series forecasting problems. The proposed paradigm is applied to the problem of energy forecasting, resulting in the construction of models that dynamically adapt to new patterns of streaming data. The evaluation process is realized using a real use case consisting of an energy demand and a RES production forecasting problem. Experimental results indicate that online learning models outperform offline learning models by 8.6% in the case of energy demand and by 11.9% in the case of RES forecasting in terms of mean absolute error (MAE), highlighting the benefits of incremental learning.
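As an illustration of what “online” means in this setting, the following is a minimal sketch of incremental learning for an energy time series: a linear autoregressive model whose weights are updated one observation at a time. The function name, the toy series, and all parameters are hypothetical and are not taken from the paper.

```python
# Minimal sketch of online (incremental) learning for a univariate
# energy time series: a linear autoregressive model updated one
# observation at a time with stochastic gradient descent.
# All names and the toy series are illustrative, not from the paper.

def online_ar_forecast(series, lr=0.01, lags=3):
    """Produce one-step-ahead forecasts while updating weights online."""
    w = [0.0] * lags          # AR coefficients, learned incrementally
    b = 0.0                   # bias term
    errors = []
    for t in range(lags, len(series)):
        x = series[t - lags:t]                    # most recent lagged values
        y_hat = b + sum(wi * xi for wi, xi in zip(w, x))
        y = series[t]
        err = y - y_hat
        errors.append(abs(err))
        # SGD update: the model adapts to the newest observation immediately,
        # unlike a batch model fitted once and left static.
        b += lr * err
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    mae = sum(errors) / len(errors)               # mean absolute error (MAE)
    return w, mae
```

On a drifting or periodic series, the weights keep tracking the current pattern, which is the property the abstract credits for the improvement over offline training.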
The rising digitisation of the energy system and related services is unveiling an enormous opportunity for energy stakeholders to leverage Big Data & AI technologies for improved decision making and for coping with challenges emerging from an increasingly complex and interconnected energy system. Initiatives in the field of Big Data Reference Architectures, such as IDSA, GAIA-X, or FIWARE, provide generic frameworks to share, manage, and process Big Data. Through alignment among them and the integration of missing aspects, an interoperable and secure framework for the energy domain comes into view. The Reference Architecture presented in this paper moves towards this goal and will be instantiated in a set of concrete use cases within the European energy sector. Structurally inspired by SGAM and the BRIDGE Reference Architecture, it puts concrete analytics processes and data source components into context, taking important issues of Data Governance, Security, and Value Creation into account.
Energy efficiency projects are often fragmented, carry high transaction costs, and fall below the minimum value that many private financial institutions are willing to consider. The availability of comparable, anonymised historical data pooled from major market segments, structured along major project characteristics, can encourage greater investment flow into energy management and efficiency. The aim of this paper is to identify investment financing patterns in a pool of projects provided in Latvia and to discover possible Grant Financing Plans (GFPs) for future use. These GFPs could improve decision making in the energy sector in terms of the percentage of grant financing per project. Improving the grant financing process can attract and mobilise private funding for such projects, providing investors/financiers (e.g., commercial/green investment banks, institutional/insurance funds, etc.) and project developers (public/local authorities, energy providers, ESCOs, construction companies, etc.) with data and tools to identify sustainable investment pathways and decrease investment risk.
Interoperability within a data space requires participants to be able to understand each other. But how do you get data space participants to use a common language? According to the IDS Reference Architecture Model (IDS-RAM), the main responsibility for this common language lies with an intermediary role called a vocabulary provider. This party manages and offers vocabularies (ontologies, reference data models, schemata, etc.) that can be used to annotate and describe datasets and data services. The vocabularies can be stored in a vocabulary hub: a service that stores the vocabularies and enables collaborative governance of the vocabularies. The IDS-RAM specifies little about how vocabularies, vocabulary providers, and vocabulary hubs enable semantic interoperability. The hypothesis that we address in this position paper is that a vocabulary hub should go a step further than publishing and managing vocabularies, and include features that improve ease of vocabulary use. We propose a wizard-like approach for data space connector configuration, where data consumers and data providers are guided through a sequence of steps to generate the specifications of their data space connectors, based on the shared vocabularies in the vocabulary hub. We illustrate this with our own implementation of a vocabulary hub, called Semantic Treehouse.
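The wizard idea can be sketched as follows: both sides of an exchange derive their connector specification from the same vocabulary entry, so their schemas agree by construction. This is a hypothetical illustration; the vocabulary content, field names, and specification format are invented and do not reflect Semantic Treehouse's actual API.

```python
# Hypothetical sketch of a connector-configuration wizard: generate a
# data space connector specification from a shared vocabulary entry,
# so provider and consumer describe the same dataset in the same terms.
# Vocabulary content and spec fields are invented for illustration.

VOCABULARY_HUB = {
    "ex:SmartMeterReading": {
        "fields": {"meterId": "xsd:string",
                   "timestamp": "xsd:dateTime",
                   "kWh": "xsd:decimal"},
    },
}

def generate_connector_spec(role, dataset_class, endpoint):
    """Walk the wizard steps: pick a role, a vocabulary class, an endpoint."""
    vocab = VOCABULARY_HUB[dataset_class]         # step 1: shared semantics
    return {                                      # step 2: connector spec
        "role": role,                             # "provider" or "consumer"
        "datasetClass": dataset_class,
        "schema": vocab["fields"],                # schema derived from vocabulary
        "endpoint": endpoint,
    }

provider_spec = generate_connector_spec("provider", "ex:SmartMeterReading",
                                        "https://example.org/data")
consumer_spec = generate_connector_spec("consumer", "ex:SmartMeterReading",
                                        "https://example.org/data")
# Both sides share the same schema: this is the interoperability point.
assert provider_spec["schema"] == consumer_spec["schema"]
```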
Mainstreaming energy efficiency financing has been considered a key priority during the last decade among several stakeholders. The capability offered by Multicriteria Decision Analysis to integrate cross-domain financial and energy consumption data, combined with statistical analysis techniques and data abundance, contributes to building the necessary market confidence in energy efficiency projects and to making them an attractive investment asset class. In this context, the aim of this paper is to propose a solid methodological framework to support the financing of energy efficiency investments and to identify improved grant financing plans, considering a series of factors that are of vital importance for the sustainability of such actions and the limitation of investment risk.
A decision support tool, developed in Python, is presented, which implements the suggested methodology and improves decision making for the investor in terms of the percentage of grant financing per project. The methodology has been applied to a reliable dataset of energy efficiency projects from several cities in Latvia, where the actual performance of the investments is exploited. Its application has resulted in a financing plan that achieves about the same energy savings while bringing a 15% reduction in the cost of the energy efficiency investments.
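The core of such a multicriteria approach can be sketched as a weighted-sum score per project mapped to a grant-financing percentage. This is a minimal illustration only: the criteria, weights, and thresholds below are invented and do not reproduce the paper's actual model.

```python
# Illustrative sketch of the idea behind the methodology: score each
# project on several criteria with a weighted sum (a basic multicriteria
# technique) and map the score to a grant-financing percentage.
# Weights, criteria, and thresholds are invented, not from the paper.

CRITERIA_WEIGHTS = {"energy_savings": 0.5,
                    "cost_efficiency": 0.3,
                    "co2_reduction": 0.2}

def grant_percentage(project):
    """Return a suggested grant share (0-1) from normalised criteria in [0, 1]."""
    score = sum(CRITERIA_WEIGHTS[c] * project[c] for c in CRITERIA_WEIGHTS)
    if score >= 0.7:          # strong project: assumed to need less grant support
        return 0.2
    if score >= 0.4:          # medium potential: moderate support
        return 0.4
    return 0.6                # weak project: assumed viable only with a larger grant

p = {"energy_savings": 0.8, "cost_efficiency": 0.6, "co2_reduction": 0.5}
# score = 0.5*0.8 + 0.3*0.6 + 0.2*0.5 = 0.68 -> medium band, 40% grant
```

The direction of the mapping (stronger projects receive smaller grants) is an assumption made here for illustration; calibrating it against the actual performance data is precisely what the tool is for.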
Accurately forecasting solar plant production is critical for balancing supply and demand and for scheduling distribution network operation in the context of inclusive smart cities and energy communities. However, the problem becomes more demanding when there is an insufficient amount of data to adequately train forecasting models, because plants have been recently installed or smart meters are lacking. Transfer learning (TL) offers the capability of transferring knowledge from a source domain to different target domains to resolve related problems. This study uses a stacked Long Short-Term Memory (LSTM) model with three TL strategies to provide accurate solar plant production forecasts. TL is exploited both for weight initialization of the LSTM model and for feature extraction, using different freezing approaches. The presented TL strategies are compared to the conventional non-TL model, as well as to the smart persistence model, in forecasting the hourly production of six solar plants.
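The two TL ingredients named above, weight initialization from a source model and layer freezing, can be sketched in a framework-agnostic way. Here a tiny two-layer linear model stands in for the stacked LSTM, and all data and parameters are invented; in a deep learning framework the same idea is expressed by copying pretrained weights and marking layers as non-trainable.

```python
# Framework-agnostic sketch of two transfer learning ingredients:
# (1) initialise target-model weights from a pretrained source model, and
# (2) freeze early layers so only later layers adapt to the target plant.
# A tiny two-layer linear model stands in for the stacked LSTM;
# all numbers are illustrative.

def train_step(weights, frozen, x, y, lr=0.05):
    """One gradient step on y ~ w2 * (w1 * x); frozen layers are not updated."""
    w1, w2 = weights
    h = w1 * x                     # "feature extractor" layer
    y_hat = w2 * h                 # "head" layer
    err = y_hat - y
    grad_w2 = err * h
    grad_w1 = err * w2 * x
    if "layer1" not in frozen:     # freezing = skip the update for that layer
        w1 -= lr * grad_w1
    if "layer2" not in frozen:
        w2 -= lr * grad_w2
    return [w1, w2]

source_weights = [2.0, 1.5]        # pretrained on a data-rich source plant
weights = list(source_weights)     # TL: initialise target model from source
for x, y in [(1.0, 3.5), (2.0, 7.0), (0.5, 1.75)]:   # scarce target data
    weights = train_step(weights, frozen={"layer1"}, x=x, y=y)

assert weights[0] == source_weights[0]   # frozen layer kept source knowledge
```

With the first layer frozen, only the head adapts to the target plant's few observations, which is the feature-extraction style of transfer compared in the study.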
Despite the large number of technology-intensive organisations, their corporate know-how and underlying workforce skills are not mature enough for a successful rollout of Artificial Intelligence (AI) services in the near term. However, things have started to change, owing to the increased adoption of data democratisation processes, the capability offered by emerging technologies for data sharing while respecting privacy, protection, and security, and appropriate learning-based modelling capabilities for non-expert end-users. This is particularly evident in the energy sector. In this context, the aim of this paper is to analyse AI and data democratisation in order to explore the strengths and challenges in terms of data access problems and data sharing, algorithmic bias, AI transparency, privacy and other regulatory constraints for AI-based decisions, as well as novel applications in different domains, with particular emphasis on the energy sector. A data democratisation framework for intelligent energy management is presented. In doing so, the paper highlights the need for the democratisation of data and analytics in the energy sector, toward making data available to the right people at the right time, allowing them to make the right decisions, and eventually facilitating the adoption of decentralised, decarbonised, and democratised energy business models.
This study introduces an energy management method that smooths electricity consumption and shaves peaks by scheduling the operating hours of water pumping stations in a smart fashion. Machine learning models are first used to accurately forecast, on an hourly level, the electricity consumed and the electricity produced by renewable energy sources. The forecasts are then exploited by an algorithm that optimally allocates the operating hours of the pumps with the objective of minimizing predicted peaks. Constraints related to the operation of the pumps are also considered. The performance of the proposed method is evaluated in the case of Tilos, a remote Greek island. The island hosts an energy management system that facilitates the monitoring and control of local water pumping stations supporting residential water supply and irrigation. Results indicate that smart scheduling of water pumps in a small-scale island environment can reduce the daily and weekly deviation of electricity consumption by more than 15% at no monetary cost. It is also concluded that the potential gains of the proposed approach are strongly connected with the amount of load that can be shifted each day, the accuracy of the forecasts used, and the amount of electricity produced by renewable energy sources.
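The scheduling idea admits a simple sketch: given hourly net-load forecasts, place the required pump operating hours on the lowest-load hours so that pumping fills the valleys instead of raising the peaks. The greedy rule, the load figures, and the pump parameters below are illustrative stand-ins, not the island's real data or the paper's exact optimization.

```python
# Toy sketch of the scheduling idea: given hourly net-load forecasts
# (consumption minus renewable production), place the required pump
# operating hours on the lowest-load hours to avoid raising peaks.
# Figures and constraints are illustrative, not the island's real data.

def schedule_pumps(net_load_forecast, pump_hours, pump_kw):
    """Return pump operating hours (indices) and the resulting load profile."""
    # Greedy allocation: lowest-load hours first.
    order = sorted(range(len(net_load_forecast)),
                   key=lambda h: net_load_forecast[h])
    chosen = sorted(order[:pump_hours])
    scheduled = [load + (pump_kw if h in chosen else 0)
                 for h, load in enumerate(net_load_forecast)]
    return chosen, scheduled

forecast = [300, 280, 260, 250, 270, 320, 400, 480,   # night / morning
            450, 420, 390, 370, 360, 380, 410, 440,   # midday
            470, 520, 560, 540, 500, 460, 400, 340]   # evening peak
hours, new_profile = schedule_pumps(forecast, pump_hours=4, pump_kw=50)
# Pumping lands in the night valley, so the daily peak is untouched
# while the valley is filled, i.e. the profile is smoothed.
assert max(new_profile) == max(forecast)
```

The real method adds pump operation constraints and optimizes rather than greedily sorting, but the mechanism that yields the reported smoothing, shifting flexible load into forecast valleys, is the one shown here.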
Energy efficiency is critical for meeting global energy and climate targets, yet it requires significant investments. Due to the lack of mature decision-support systems and the use of traditional investment mechanisms that focus on the economic aspects of energy efficiency projects while neglecting their environmental impact, such projects can experience difficulties in securing funding. Meanwhile, the impact of the digitization era is more apparent than ever, as algorithms and data availability and quality have significantly improved. This study aspires to bridge the gap in energy efficiency financing with the development of a data-driven methodology that labels energy efficiency investments based on their expected utility in terms of renovation cost and energy savings. Various machine learning classification methods are deployed and combined through a meta-learning model with the objective of improving overall classification performance and determining the funding that each investment should receive according to its particular characteristics. The proposed methodology is evaluated using a set of 312 projects completed in Latvia. Our results indicate that the meta-learner outperforms all baseline classifiers, effectively identifying projects of high and medium potential and successfully distinguishing low from high potential ones.
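The combination step can be sketched in miniature: base classifiers each label a project's potential, and a meta-level combiner merges their outputs. In this bare-bones illustration the base models are fixed rules and the combiner is an accuracy-weighted vote standing in for a trained meta-learner; labels, rules, and weights are all invented.

```python
# Bare-bones sketch of the stacking idea: base classifiers label a
# project as low/medium/high potential, and a meta-level combiner
# merges their outputs. Here the base models are fixed rules and the
# combiner is an accuracy-weighted vote standing in for a trained
# meta-learner. All rules, labels, and weights are illustrative.

LABELS = ["low", "medium", "high"]

def base_a(project):   # rule of thumb on expected savings
    return "high" if project["savings"] > 0.6 else "low"

def base_b(project):   # rule of thumb on renovation cost
    return "medium" if project["cost"] < 0.5 else "low"

def meta_predict(project, base_models, weights):
    """Accuracy-weighted vote over base-model predictions."""
    votes = {label: 0.0 for label in LABELS}
    for model, w in zip(base_models, weights):
        votes[model(project)] += w
    return max(votes, key=votes.get)

# In real stacking, the weights (or a full meta-classifier) are learned
# from the base models' out-of-fold predictions on validation data.
prediction = meta_predict({"savings": 0.7, "cost": 0.4},
                          [base_a, base_b], weights=[0.8, 0.6])
```

A trained meta-learner generalizes this by learning, from validation data, when to trust which base model, which is what lets it outperform every individual baseline.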