An Empirical Characterization of Event Sourced Systems and Their Schema Evolution -- Lessons from Industry

Event sourced systems are increasing in popularity because they are reliable, flexible, and scalable. In this article, we point a microscope at a software architecture pattern that is rapidly gaining popularity in industry, but has not received as much attention from the scientific community. We do so through constructivist grounded theory, which proves to be a suitable qualitative method for extracting architectural knowledge from practitioners. Based on the discussion of 19 event sourced systems, we explore the rationale for and the context of the event sourcing pattern. A description of the pattern itself and its relation to other patterns, as discussed with practitioners, is given. The description is grounded in the experience of 25 engineers, making it a reliable source for both new practitioners and scientists. We identify five challenges that practitioners experience: event system evolution, the steep learning curve, lack of available technology, rebuilding projections, and data privacy. For the first challenge, event system evolution, we uncover five tactics and solutions that support practitioners in their design choices when developing evolving event sourced systems: versioned events, weak schema, upcasting, in-place transformation, and copy-and-transform.


Introduction
Software systems are increasing in complexity, used in increasingly critical processes, and serve increasing numbers of end-users. Architectural patterns enable engineers to build these systems using knowledge acquired by other engineers. Influential books such as Patterns of Enterprise Application Architecture by Fowler (2002) and Enterprise Integration Patterns by Hohpe and Woolf (2004) demonstrate the impact of pattern descriptions on software engineering. Architectural patterns are part of the trend of knowledge-based architecture design (Li et al., 2013). Kassab et al. (2018), Taibi et al. (2018), and Harrison et al. (2007) show how patterns are instrumental in the capturing of architectural design decisions. In this article, we describe such a pattern in detail and provide the design decisions that were employed in practice, with the goal of providing a comprehensive source of knowledge for practitioners.
Recently, the event sourcing pattern has become a popular answer to the challenges of complex, mission-critical, scalable systems. Examples of organizations that apply event sourcing are Netflix (Avery and Reta, 2017), and Walmart's Jet.com (Gorodinski, 2017), with the goal of creating scalable and reliable critical systems. Event sourcing is informally described by Fowler (2005) as a pattern that "ensures that all changes to application state are stored as a sequence of events." Flexibility, debug-ability, and reliability are given by Avery and Reta (2017) as rationale for using event sourcing. Debski et al. (2017) and Erb and Hauck (2016) show how event sourcing can be applied to achieve scalable, reactive systems. Kabbedijk et al. (2012) describes event sourcing as a sub pattern of Command Query Responsibility Segregation (CQRS) in his work on the improved variability and scalability of systems applying CQRS.
The events in event sourcing, as opposed to general event-driven architectures (EDAs) (Fowler, 2017), are stored as an append-only log of all state changes. Two key characteristics separate event sourcing from event-driven approaches, such as stream processing, transactional processing, and blockchain. First, events in Event Sourced Systems (ESSs) are stored as the state of the application.
Other approaches use the events to communicate, while the communication aspect comes second in ESSs. The second difference is that events are closely related to events occurring in real world business processes. This allows event sourcing to also be used as a design approach. Domain-Driven Design (DDD), as described by Evans (2003), advocates events as a design tool for the process flow of a software system. Brandolini (2018) proposes event storming (analogous to brainstorming), a group design process that focuses on the events that take place in a software system. Further details on these analogous approaches are found in Section 3.
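To make the first characteristic concrete, the core of the pattern can be sketched in a few lines. The bank-account domain, the event names, and the `EventStore` class below are our own illustration, not drawn from any of the systems studied:

```python
from dataclasses import dataclass

# Hypothetical domain events; frozen dataclasses mirror event immutability.
@dataclass(frozen=True)
class AccountOpened:
    owner: str

@dataclass(frozen=True)
class MoneyDeposited:
    amount: int

@dataclass(frozen=True)
class MoneyWithdrawn:
    amount: int

class EventStore:
    """Append-only log: events are only ever added, never updated or deleted."""
    def __init__(self):
        self._log = []

    def append(self, event):
        self._log.append(event)

    def replay(self):
        return iter(self._log)

def current_balance(store: EventStore) -> int:
    """Application state is derived by folding over the full event history."""
    balance = 0
    for event in store.replay():
        if isinstance(event, MoneyDeposited):
            balance += event.amount
        elif isinstance(event, MoneyWithdrawn):
            balance -= event.amount
    return balance

store = EventStore()
store.append(AccountOpened(owner="alice"))
store.append(MoneyDeposited(amount=100))
store.append(MoneyWithdrawn(amount=30))
assert current_balance(store) == 70  # state reconstructed from events
```

The essential property is that `current_balance` never reads a stored "current state": every read is derived from the immutable history of state changes.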
Although event sourcing is related to existing ideas such as EDAs, the pattern itself has not yet been thoroughly studied. Most knowledge exists in so-called 'grey literature': practitioner blogs and anecdotal experience reports. In previous work (Overeem et al., 2017), which focused on the evolution of ESSs, we experienced this lack of literature. This work fills that gap by deriving an integral description of event sourced systems through interviews with 25 engineers. Together with this description we identify four categories of rationale for the application of event sourcing, such as a decrease of complexity. In this "In Practice" submission, we also identify five engineering challenges around the pattern, with schema evolution being one of the most complex challenges. With the pattern description and its liabilities presented in this article, we enable engineers to make a considered choice. Our work is not dissimilar to the work of Musil et al. (2015), who conducted an extensive study on collective intelligence system pattern variations, with the goal of enabling architects to predict the outcomes of different design decisions. Similarly, Slotos (2016) describes the Star pattern for enabling flexible business applications, also with the goal of supporting software architecture researchers and practitioners and promoting the pattern itself.
Our study regards a new research area, therefore, we apply Grounded Theory (GT). Adolph et al. (2011) describe GT as a useful approach for research in areas that have not previously been studied.
A GT explains how people resolve their main concern by employing a certain process. That process is called the 'core category' of the GT. The core category of the work presented in this article is the process of designing and implementing event sourced systems, as performed by software engineers. The theoretical definition of event sourcing helps both researchers and practitioners to understand, reason about, and teach the pattern and its consequences. Section 2 explains how we applied GT to form a basis for conceptualization of ESSs from 25 interviews, and how the three essential elements are covered. From the gathered data we distill the pattern description and its consequences. This work has the following contributions:
• Section 3 contrasts ESSs with other existing architectural patterns, such as EDAs and blockchain, and shows that ESSs are insufficiently described in existing literature.
• Section 4 describes the rationale for using ESSs: they provide audit functionality, are highly flexible and scalable, enable the development of highly complex systems, and are a current trend. The overview of 19 different ESSs elaborates on the context of the pattern, showing that event sourcing is applied in different kinds of systems, from small to extremely large.
• Section 5 provides a thorough description of ESSs based on the findings of the interviews, presenting the pattern itself including its relation to CQRS. It also reflects on the role of the (implicit) schema present in ESSs.
• Section 6 presents the engineering challenges that engineers encounter during the development of ESSs, such as a steep learning curve, poor ESS performance, and dealing with privacy regulations such as the General Data Protection Regulation (GDPR).
• Section 7 focuses on the most prominent challenge encountered in ESSs: schema evolution. Five empirically established methods that support ESS evolution are presented. We advise that systems start out using versioned events and weak schema, while later evolving to upcasting and even copy-and-transform techniques.
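To give a flavour of these tactics, a minimal sketch of upcasting combined with versioned events follows. The event type, its fields, and the version numbers are hypothetical, chosen only to illustrate the mechanism:

```python
# 'Upcasting': stored events are never modified; on read, an upcaster
# rewrites an old-schema event into the latest schema version.
def upcast_v1_to_v2(event: dict) -> dict:
    """Rewrite a version-1 event to the version-2 schema.

    In this hypothetical evolution, v2 split the single 'name' field
    into 'first_name' and 'last_name'."""
    first, _, last = event["name"].partition(" ")
    return {
        "type": "CustomerRegistered",
        "version": 2,
        "first_name": first,
        "last_name": last,
    }

# Registry mapping (event type, version) to the upcaster for that version.
UPCASTERS = {("CustomerRegistered", 1): upcast_v1_to_v2}

def read_event(stored: dict) -> dict:
    """Apply upcasters until the event is at the latest schema version."""
    while (stored["type"], stored["version"]) in UPCASTERS:
        stored = UPCASTERS[(stored["type"], stored["version"])](stored)
    return stored

old = {"type": "CustomerRegistered", "version": 1, "name": "Ada Lovelace"}
assert read_event(old)["first_name"] == "Ada"
```

In-place transformation and copy-and-transform would instead rewrite the stored events themselves, either in the existing store or into a new one, trading immutability for a simpler runtime.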
The validity threats of this work, such as the fact that the interviewees were pragmatically collected, are discussed in Section 9. We conclude that ESSs enable complex scalable systems with auditing capabilities and that our theoretical definition enables further research and development of these systems.

Research Approach: Constructivist Grounded Theory
In our early literature search, we identified that there is little academic material available on the topic of event sourcing. Grounded Theory (GT) is defined as a systematic methodology involving the construction of theories through methodical gathering and analysis of data. Adolph et al. (2011) explain how GT is particularly useful for research in areas that have not been studied before. Our investigation of ESSs has an exploratory nature; therefore, we use GT to structure our research approach. Furthermore, we aim to inspire researchers to experiment with novel approaches to gathering architecture knowledge.
GT is a common research strategy in software engineering research and induces theory from empirically collected material, such as interviews or case studies. For instance, Hoda et al. (2012) explore the practices of self-organizing agile teams using GT. Greiler et al. (2012) apply GT to improve the understanding of testing practices for plug-in systems. Tamburri and Kazman (2018) recover software architectures by applying GT. Last, Santos et al. (2019) study common vulnerabilities in plug-and-play architectures through GT.
Similarly, we use GT to explore event sourcing, and to improve our understanding of the pattern, its applications, and its challenges. Constructivist GT assumes that neither data nor theories are discovered, but that they are constructed by the researchers out of the interactions with the field and its participants. Data are co-constructed by researchers and participants, and coloured by the researchers' perspectives and values. Within this approach, a literature review is used in a constructive and data-sensitive way without forcing it on the data. We have employed constructivist GT (Charmaz, 1996) in our research; we knew we would find a description of the pattern, but were not aware what other concepts, challenges, and motivations would be identified.

Research Questions and Motivation
The motivation of our research is formed by five years of experience in the development of an event sourced system and earlier research on schema evolution in ESSs (Overeem et al., 2017). This experience guided our research and the direction of our exploration. Effectively, our previous work is also part of the GT data set, and has been translated directly into the research protocol. The main goal of the research project was to come to a cohesive theory around the event sourcing architecture pattern. The research questions, formulated a priori as per constructivist GT, guided the research and evolved into the following final set:
RQ1 What types of systems apply event sourcing and why?
RQ2 How can event sourced systems be defined?
RQ3 How can event sourced data structures be evolved?
RQ4 What are the challenges faced by practitioners in applying event sourcing?
Our previous study in the domain (Overeem et al., 2017) gained significant industry interest, which led us to attend many industry events, where we were often invited as keynote speakers. This provided us extensive access to practitioners in the field, who would offer their support and advice. Through these rich interactions it became obvious that an extensive interview study could lead to new results and research challenges in the domain.
Foundations for the Study. While in GT it is recommended that the researchers do not perform an extensive literature study before the research project, many have acknowledged that this is almost impossible and at times even impractical (Stol et al., 2015; Charmaz, 1996). As little academic literature was available, it was easy to fulfill this major GT guideline. This research project was started after we had already published in this domain (Overeem et al., 2017) ourselves. We made our previous work part of the initial data set and also included the works of Fowler, e.g. (Fowler, 2017). The main concepts were extracted from these works and subsequently used to create an interview protocol. Throughout the project, as we gathered new evidence and encountered new concepts, we performed an exploratory literature study for each. Furthermore, if the interviewees mentioned an academic paper, it became part of our literature set. New concepts were extracted from this literature and integrated with the interview protocol where necessary. The literature was explored by snowballing forward and backward one level.

Sampling and Interviewees
The interviewed engineers volunteered to contribute to our research after being invited through different channels. Based on our experience in developing ESSs in the past years, we identified the primary locations through which the event sourcing and DDD community communicates. We invited the engineers through channels such as Google Groups and Slack channels. In addition to this open invitation, we explicitly contacted and invited a number of well known community members. We executed interview snowballing, a process similar to snowballing in systematic literature studies (Wohlin, 2014): we explicitly asked every interviewee for further references. The interviewees were not compensated for their cooperation.
Our direct and indirect invitations resulted in interviews with 25 engineers. The engineers are event sourcing practitioners in the roles of developers, architects, and product owners. A number of these engineers were consultants, while others were employed by the company that owns the system. The consultants operate as external advisers (in addition to being hired as developers or architects) and are hired by multiple companies because of their experience. Table 1 summarizes the engineers, including their role, years of experience with ESSs, and the number of ESSs they worked on. Combined they have 103 years of experience, with an average of four years per engineer. For two of the engineers (E14, E16) it is hard to tell how many systems they worked on over the years, because their consultancy work exposed them to many different systems. A number of the engineers worked on the same system(s), and were interviewed together. We conducted 22 distinct interviews with the 25 engineers. Three interviews were conducted with two engineers each, as these engineers worked on the same system. In the case of E4 and E5, and of E20 and E21, the engineers had different roles, and their experiences complemented each other during the interview. Engineers E9 and E10 shared their role, and their answers showed more overlap. The systems are discussed in Section 4. We will refer to the engineers by the number given to them in Table 1.

Interview Techniques and GT
Each interview took 30-90 minutes, either in person or via video conference. The protocol presented in Appendix A was created using the guidelines of Castillo-Montoya (2016). During the interviews, we asked open-ended questions exploring event sourced systems. The questions asked during the interviews were based on a protocol that is downloadable with the interview transcripts (Overeem et al., 2021).
The protocol was followed freely: the answers given by the engineers guided the interviews. The four protocol parts remained stable over the interviews. Some of the interview questions were sharpened or added as the interviews progressed, a technique encouraged by practitioners of GT. The protocol used at the last interview is presented in the appendix. The first part of the interview focuses on the context of the event sourced system and the engineer: what are the characteristics of the system, and why is event sourcing applied. Versioning of event sourced systems is discussed in the second part of the interviews; based on our experience with this topic, we identified it as an important challenge. The third part deals with the relation of event sourcing to CQRS, DDD, and other challenges. Finally, we discuss whatever the engineer thinks should be discussed in relation to event sourcing.

Coding, Analysis, and Creativity
Each interview transcript was analyzed, as part of the GT approach, through an open coding process. The interviews were conducted by the first author; the transcripts were reviewed by another author after creation. The first and second author performed the codification and categorization, while the third author validated and confirmed the steps. The authors maintained a shared memo-ing document where ideas and emerging concepts were noted for discussion with all co-authors. Disagreements in the codification and categorization were resolved through discussions among the authors until agreement was found, while older versions of concepts were maintained in the memo-ing document. The coding process was both organic and methodical. We provide an example of the coding process. One of the concepts that was discussed extensively was that of auditing and the ability to have a change log for all events in the system. E2: it "has saved the finger of blame from pointing at us so many times ... that bit is worth its weight in gold to me." E4, translated: "I would save the old version forever ... for if we end up in court." Many of the interviewees put equal emphasis on the role of the audit log. The paragraphs from the transcripts mentioning the audit log were first coded and linked to the concept audit. From those codes audit emerged as one of the prevalent rationales behind the pattern. After further grouping the statements linked to audit we added more detailed codes, particularly addressing specializations of this rationale such as customer service support and regulations.
This example explains how we started with highlighting important paragraphs and sentences in the transcripts. Those highlights were coded with short summary sentences. After that the sentences were grouped by linking them to codes: topics described by a few words. From those codes we derived concepts, such as the aforementioned audit, which was later related to the category rationale. During this process we iterated until we ended with simplified categories and concepts (also known as the parsimony principle) that reflected the linked paragraphs. This process was iterative and organically executed until the first and second authors agreed on the categories and concepts. While we cannot claim that saturation was reached, this article is a presentation of the coherent concepts that emerged from the research. The nature of our study is exploratory and the research questions are broad on purpose. To reach saturation on such a large topic one would have to conduct, transcribe, and codify an impractical number of interviews. Although saturation based on the codes and concepts was not reached, we are confident that the results we present represent the overall sentiment among practitioners. While we always had the concepts of how to present an architecture pattern in the back of our minds, we decided to structure the presentation according to the results of the GT concepts and codes. The guidelines as, for instance, stated by Gamma et al. (1995) on describing a pattern through the elements problem, solution, and consequences were used during the memo sorting process to match our concepts, but not as a predefined framework in which our concepts were painstakingly framed. Section 8 discusses the relation between our concepts and the guidelines of Gamma et al. (1995). The categories, concepts, and codes found during the interviews are presented in Sections 4, 5, 6, and 7. Tables 2, 3, 4, 5, 6, and 7 summarize the results.
The interview protocol, the anonymized transcripts of the interviews, and the classification codes with links to the interviews are made available as data package (Overeem et al., 2021).

Background
The foundational idea of event sourcing is the domain event as described by Evans (2015). His seminal book on Domain-Driven Design (DDD), however, does not mention the pattern. Vernon (2013) describes event sourcing only briefly in his book on the implementation of various DDD patterns. Young (2017), as one of the original proposers of event sourcing, discusses the challenge of versioning ESSs. Event sourcing is also discussed in the context of CQRS (Young, 2010), a pattern strongly related to event sourcing. Recent academic literature (Erb, 2019; Zhong et al., 2019) shows an interest in applying event sourcing for research projects.
Three related areas and their differences with respect to ESSs are discussed: transactional processing and database systems, stream processing and EDAs, and blockchain.
Event sourcing is related to database systems techniques used for persistence guarantees and replication. Gray and Reuter (1992) describe how transaction logs can be used to replicate state between database systems. Every state change is recorded as a transaction, which is similar to event sourcing where every state change is recorded as an event. Kleppmann (2017) discusses event sourcing in the context of data-intensive applications; he relates the pattern to the change data capture approach, often used in Extract-Transform-Load (ETL) processes (Vassiliadis, 2009). ETL solutions are often used for creating data warehouses. The primary difference between event sourcing and these techniques is that a transaction or a data change is a technical entity without relation to the real world, while an event in event sourcing resembles an event in the real world.
Kleppmann also relates event sourcing to the chronicle data model described by Jagadish et al. (1995). Time series, as described by Dreyer et al. (1994), is another data model that deals with the temporal aspects of data. Both techniques are used only as data modelling techniques, while event sourcing is a software architecture pattern.
Event sourcing also shares commonalities with stream processing (Wu et al., 2006), applied in, for instance, Internet of Things (IoT) systems to process sensor events. Events in IoT systems are often used to communicate between different (sub)systems, and are not stored as the state of the system. Also, the events represent technical events such as sensor data, as opposed to real world business domain events. Another closely related topic is Complex Event Processing (CEP) as described by Luckham (2011). In CEP the focus is on pattern recognition within a stream of events. CEP itself could be applied in the processing components within an ESS, similar to the event calculus formalism. Event calculus, as described by Sadri and Kowalski (1995), is a logical language that represents the effects of events. This language, however, cannot be used to describe event sourcing as an architectural pattern. Similarly, process mining deals with the analysis of event logs from process-driven systems. The work of de Murillas et al. (2015) shows the complexity of mining processes from systems that do not record historical data. ESSs support process mining by default, which makes them suitable for enterprise systems. Anh et al. (2018) describe another append-only data structure: blockchain. While the data structure is similar to event sourcing, the goals of the two techniques are different. A blockchain focuses on solving problems related to distribution, consensus, and trust, while event sourcing solves problems with history, temporal complexity, and audit trails. The blockchain approach enforces the immutability of the data to solve its problems, while in event sourcing this immutability is self-imposed. Event sourced systems could be built using a blockchain solution. However, the distribution and consensus features offered by blockchain do not improve the goals targeted by event sourcing.

Event Sourcing In Practice
The 25 interviewed engineers have accumulated experience with at least 35 event sourced systems (ESSs). However, a number of those systems were either not yet in production, or the engineer could not recall enough details of the system. Of the 35 systems, 19 ESSs were discussed in more detail and are summarized in Table 2. Still, the experts' experience with all of these systems is reflected in the answers that they gave, and thus in the challenges, the definitions, and the schema evolution techniques. The categories in this characterization were selected based on the categorization of the concepts deduced from the interviews.
Event sourcing is applied in enterprise applications, either business-to-business or business-to-consumer, as illustrated by the interviews. We did not encounter event sourcing in IoT systems, or in other stream processing systems. This is in accordance with the community from which event sourcing originated, which focuses on enterprise applications.
The systems overview shows that the event sourcing pattern is not tied to a particular technology stack. This diversity in technology confirms that event sourcing is indeed a pattern, and not a technology.

Rationale for ESSs
The reasons for applying event sourcing can be grouped into four categories. Remarkably, all systems under study benefit from event sourcing, and no system returned to a current state model. Still, most engineers state that they would not apply event sourcing in every system. The reason given for this opinion is the added complexity of introducing event sourcing. Engineer E2 would apply event sourcing by default, because of the benefits it gives. The different rationales as discussed with the engineers are summarized in Table 3.
One of the main benefits of applying event sourcing is the retention of all state changes. According to E24, event sourcing prevents premature data deletion: "as a software developer building a data-driven system and you are modifying data, you are essentially destroying your older copy of the data. And who told you you're allowed to delete data?" We classified this group of rationales under the category audit (Van Der Aalst et al., 2010) (9/19 systems). Compliance with regulations (such as in system ProjectSys) is one of the reasons in this category. Improving customer support (ProjectSys, Advert1Sys) is another reason. In those systems the state changes are used to explain the system and its behavior to customers. Finally, simply explaining why and by whom data is changed (in debug scenarios, for instance) is given as a reason too (EmailSys).
The second category is flexibility (Lassing et al., 1999) (12/19 systems). These systems chose event sourcing (and CQRS), because of the flexibility it provides in the architecture of the system.
Examples of this flexibility are the creation of secondary indexes for search (VideoSys), building and refreshing caches (B2CSys), replacing event queues (MarketingSys, WebBuildSys, LendingSys), and scaling out to multiple read databases (VideoSys). Section 5 explains how this flexibility is achieved through the implementation of different projections and projectors. The third category is complexity (Biemans et al., 2001) (4/19 systems). These applications were considered to contain complex business logic, and are heavily process-driven instead of data-driven. Therefore, the architects designed the system as an event-driven system, starting out with the modelling of processes instead of data.
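The projections and projectors behind this flexibility can be sketched as a simple fold over the event log: because the log retains every state change, a new read model can be built, or an existing one rebuilt, at any time. The event types and the read model below are hypothetical, loosely inspired by the secondary search index of VideoSys:

```python
# A projector folds the event log into a read model (projection).
# Dropping the projection and replaying the log yields a fresh read model.
events = [
    {"type": "VideoUploaded", "id": 1, "title": "Intro to CQRS"},
    {"type": "VideoUploaded", "id": 2, "title": "Event Stores"},
    {"type": "VideoRetitled", "id": 1, "title": "CQRS in Practice"},
]

def project_titles(log):
    """Build a searchable id -> title index from the full event history."""
    index = {}
    for e in log:
        if e["type"] in ("VideoUploaded", "VideoRetitled"):
            index[e["id"]] = e["title"]
    return index

index = project_titles(events)
assert index[1] == "CQRS in Practice"
```

Adding a second projection, for instance an index by upload date, requires no change to the write side: a new projector simply replays the same log.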
The final category, and the rationale for only three of the 19 systems, is trending (Clements, 1997) (3/19 systems). The systems PaymentSys, P-PaySys, and Advert2Sys started with event sourcing because the (lead) architects picked up on a trend. They were curious about the details of the pattern, and started to implement it in the new system. In hindsight, the systems did benefit from this decision, although E9, E10, and E12 ascribe this to luck, and not to the design practices.

Characteristics of Event Sourced Systems
The core category of the GT process is the process of designing and implementing event sourced systems, as performed by software engineers. As we needed to make sure that event sourcing is not a technology but a technology-agnostic pattern, we wanted to ascertain the types of applications and the technology platforms used to realize the implemented systems. Three dimensions, the size of the event store, the workload handled by the application, and the size of the schema, are listed to indicate what kind of systems benefit from event sourcing. These dimensions show that event sourcing is not biased towards systems of a certain size. Three related topics emerged during the coding process: DDD as software design approach, CQRS as a related architecture pattern, and the Microservice Architecture (MSA) style. Together with the degree of immutability and the type of application, these aspects form the characteristics that are listed in Table 2.
Event sourcing is a pattern that stores every state change; immutability is thus at the core of the pattern. Helland (2015) states that immutability of data is a crucial aspect for distributed systems. Although often seen as the defining characteristic of event sourcing, immutability is not enforced in any manner, as opposed to a blockchain. In a number of the systems under study, immutability is sacrificed for a simpler schema evolution technique (see Section 7). We observed different degrees of immutability. The first degree is strict: 8 out of the 19 ESSs never change an event. The second degree of immutability, used by 3 out of 19 systems, allows for cut-off moments. In such a cut-off moment, the event store is changed, but back-ups guarantee that no information is deleted. The goal of these back-ups is to satisfy regulations or service-level agreements, and therefore they are kept around forever. This degree of immutability still guarantees an audit trail, because the back-ups can be used to retrieve all the state changes. The last degree of immutability is mutable: 8 out of 19 systems allow events to change. In these systems, the event store is changed on some occasions, and the back-ups are not kept forever. These systems do not satisfy the goal of a complete audit trail. However, the events can still be used to explain how the current state was reached. None of the ESSs lose information regarding the current state of the system. Events that are changed, or transformed, are in most cases changed for technical reasons.
In 14 of the 19 ESSs under study DDD is used as the design approach. DDD is an approach to software development that aims at tackling complexity in the heart of software (as the subtitle of the seminal book by Evans (2003) states).
DDD focuses on the explicit modeling of the domain, including its boundaries and events. However, only four of the 25 engineers argue that DDD is a prerequisite for event sourcing. Although the other engineers do not see DDD as a prerequisite, without a doubt DDD inspired the design of many ESSs. Events, as expressed by E11, "should represent real world business events". This is different from transactional processing or stream processing; in those systems events can have a more technical nature. An ESS that contains events not representing real world business domain events will, according to E11, undergo more changes to the software. E11 explains: "You align the events with real world events, so you are dealing with changes that have a native equivalence. Doing DDD leads to a less fragile design." For E16, the understanding of the domain is a prerequisite for doing event sourcing: "A high level of maturity of the domain knowledge is a prerequisite. When the domain knowledge is still evolving, applying event sourcing introduces more risk." CQRS is a closely related pattern that also originated from the community around the DDD approach (the pattern itself will be explained in more detail in Section 5). Although engineer E14 has seen a few solutions that apply CQRS without event sourcing, they are almost always used together. All of the systems that we discussed with the engineers applied both CQRS and event sourcing. The interviews give no explanation for this co-appearance. A possible explanation, based on the experience of the authors, could be the fact that they are often 'advertised' together in the community.
Also closely related to event sourcing is the MSA (Dragoni et al., 2017) style. Similar to DDD, the MSA style also attacks the complexity of large software systems. This is confirmed by 8 of the 19 systems that were discussed in the interviews. They implement microservices to break up a large application and control complexity by spreading the business logic over these services. We observed two approaches in the systems that combine MSA and event sourcing. The first approach uses event sourcing as an implementation detail of the microservices. In the second approach, the events are not only used to store state changes, the event store is also used to communicate these events between microservices.
Unfortunately, the experts could not uniformly report on event store size, traffic, and schema size of the characterized ESSs. Some of them could not disclose these details due to commercial reasons, while others no longer had access to the discussed system. Table 4 summarizes the details that were reported per discussed system. The systems have a size ranging from smaller than three gigabytes, up to 250 gigabytes (or more than a billion events). Eleven systems (including HealthSys, which reports a growth rate of 4 million events per day) have more than a million events in the store, representing more than half of the systems. Two systems (WebBuildSys and InventorySys) even report sizes over a billion events. Advert2Sys shows a small event store size, but that is due to the active pruning that they do. The growth of 4 million events per day shows that the total number of events is much higher than the reported five million. The growth rates show that a number of systems pass the million events per day (HealthSys, Advert2Sys, and InventorySys), but most show a number far less than a million new events per day. The schema sizes show that none of the reported systems passes 500 event types; rather, they are spread out between 20 and 450 types. Overall, Table 4 shows a wide variety of event store sizes, handled traffic, and event store schema sizes. Systems VideoSys, PaymentSys, ApproveSys, and InventorySys show that ESSs are not only used for small business domains. And the event store size shows that event sourcing can be used for both small and large systems.

Event Stores and Event Sourced Systems
This section defines key concepts and operations in an event sourced system (ESS). These definitions are based on our experience building ESSs, and confirmed by the interviews that were conducted. They are used to conceptualize event sourcing and the identified challenges. When coding the interviews, different characteristics and variability of the concepts and operations were identified, which are described in this section. These concepts and operations can be used in discussing and teaching ESSs.

The Event Store
We propose the following definitions for the concepts and operations related to an event store. First the concepts are defined, starting with events all the way up to the store. After that the operations on the event store are given.

Event. An event is a discrete data object specified in domain terms that represents a state change in an ESS.
An example of an event from the Netflix case (Avery and Reta, 2017) that represents a real world business event is given in JSON format: { "LicenseCreated": { "customerId": "BlackMirror", "titleId": "TheNationalAnthemS01E01", "date": "2014-01-06" } } The importance of the relation to the business domain is stated by E5: "business analysts are telling us what the events should be." E11 adds "you capture business changes as a flow of events, you align these events with real world events." A more general definition is given by Michelson (2006): "a notable thing that happens". It lacks the relation to the business domain as it is used for event-driven architectures in general. The data in the events can be stored in different formats such as JSON, XML, AVRO (The Apache Software Foundation, 2019), or Protobuf (Google Inc., 2019). Events are stored in a sequence, in event streams.
Both E14 and E25 see a distinction between internal and external events. Internal events are fine-grained and contain more detail, while external events are more coarse-grained and meant for communication with other systems. Through this distinction it is possible to hide internal business logic from external consumers. Multiple engineers (E12, E14, E16, E17, and E22) also acknowledge the usefulness of state propagation through events: instead of marking a business event, events can also be used to simply propagate the state of an object. The sequence numbers are not handed out by the event stream, but are supplied by the producer of the new events.
The event stream does validate if the sequence numbers are consecutive natural numbers. E3 explains how this is used by event subscribers: "you get this monotonically increasing sequence of events that you can use to record your position." The streams together are stored in the event store.

Event Store. An event store is a set of event streams. These streams form the partitions of the event store, and are disjoint.
The event store has two foundational operations on event streams: read and append. The read operation enables systems to read an event stream from a given sequence number. Events are appended to the event stream with the append operation. E20 explains how append is the only operation that changes the event stream: "I only append new events, and never throw away old events." The append operation has an extra validation: the caller should supply the sequence number for the new event, which is validated, and an error is returned if it is not the expected number. Through this validation the store achieves optimistic concurrency control. According to engineer E24, this is the strongest guarantee that the event store should offer. A caller will first need to read from the event stream, before append can be called. If another caller calls append in between, the append of the first caller will fail, because the highest sequence number has changed.
Both the read and append operations operate on single streams; this emphasizes the fact that the streams in an event store are disjoint. The append function can either append a single event, or multiple events, depending on the implementation. For instance Event Store (2019) implements the append function with a version that atomically appends multiple events to the stream.
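The read and append operations defined above can be illustrated with a minimal in-memory sketch. This is our own illustration, not code from any of the systems under study; the class and exception names are ours. The append operation takes the sequence number the caller expects for its first new event, which implements the optimistic concurrency control described above.

```python
class ConcurrencyError(Exception):
    """Raised when the supplied sequence number is not the expected one."""


class EventStream:
    """An ordered sequence of events with consecutive sequence numbers."""

    def __init__(self):
        self._events = []

    def read(self, from_seq=0):
        """Read (sequence number, event) pairs from a given sequence number."""
        return list(enumerate(self._events))[from_seq:]

    def append(self, expected_seq, *events):
        """Append events. The caller supplies the sequence number it expects
        for the first new event; a mismatch means another caller appended in
        between (optimistic concurrency control)."""
        if expected_seq != len(self._events):
            raise ConcurrencyError(
                f"expected sequence {len(self._events)}, got {expected_seq}")
        self._events.extend(events)


class EventStore:
    """A set of disjoint event streams, keyed by stream id."""

    def __init__(self):
        self._streams = {}

    def stream(self, stream_id):
        return self._streams.setdefault(stream_id, EventStream())
```

A caller that reads a stream, is overtaken by a second caller, and then appends with a stale sequence number receives a ConcurrencyError and must re-read before retrying.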

The Event Sourced System
Enterprise software applications support at least two foundational use cases: storing information and retrieving information. The event store is used to store the state changes in the system, however, the event store is not optimized for retrieving information.
In ESSs the project function is central in both storing new information and retrieving information. First we define and characterize this function. Second we discuss storing and retrieving information by presenting two parameterized operations.
Project function. The project function takes one or more event streams and creates a projection with the data from the given events. The projection itself can take different forms: for instance, it can be a relational database updated through SQL statements, or a search index manipulated through the filesystem.
The project function operates on one or more event streams. The event streams are disjoint, and the function thus cannot assume an order between the events from the different streams. While the order of events in a single stream is guaranteed, the events from different streams have no relation.
The projection that is built by the project function in an ESS is similar to the concept of projections in relational algebra (Date, 2003): projections contain a selection of the data present in events. Projections are also similar to views in a relational database: a selection and transformation of one or more database tables.

Projections.
A projection is a selection of the data stored in events, transformed into a specific model. The selection and transformation depend on the purpose of the projection. The data in a projection is transient; a projection can be rebuilt from its source events at any point.
Examples of different variations of projections are frequently given by the engineers. Engineer E6 for instance explains how they project the event data to both Neo4J (a graph database) and ElasticSearch (a document database). The graph database serves the navigation through the data, while the document database serves the search functionality. Other examples given are a specific storage technology for indexing (used for instance by E8, E12, and E23), an analysis to report abuse of accounts, and a relational table with all issued licenses for downloaded content.
The primary design question of the project function and its target projection is its purpose. The importance of the project function lies in its encapsulation of the variability in storage technology, data selection, and data model. Choices can be made per project function, which enables a huge potential for projections optimized for their purpose. The flexibility as a reason for choosing event sourcing (Section 6) is in large part enabled by the project function. The function also poses a risk for the performance of the system, a challenge we discuss in Section 6. The time it takes to build a projection depends on two factors: the number of events that are read and the time it takes to update the projection. Engineers E11, E13, and E14 discuss their search for improved implementations of projectors. Quick improvements can be found in faster storage technology, or better use of hardware. Engineer E12 explains how they prune the event stream by moving older events into a different stream. This pruning decreases the number of events that the project function needs to process, making the rebuilding faster. Engineer E14 discusses how they rather plan the rebuilding in weekends, instead of investing developer effort in optimization.
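The project function can be sketched as a fold over an event stream: each event is applied in order to an accumulating projection. This is a minimal sketch of our own; the event types and the shape of the projection (a dictionary of active licenses per customer, loosely following the Netflix license example above) are illustrative assumptions.

```python
from functools import reduce

# Events as (type, data) pairs, loosely mimicking the LicenseCreated example.
events = [
    ("LicenseCreated", {"customerId": "BlackMirror", "titleId": "S01E01"}),
    ("LicenseCreated", {"customerId": "BlackMirror", "titleId": "S01E02"}),
    ("LicenseRevoked", {"customerId": "BlackMirror", "titleId": "S01E01"}),
]

def project(projection, event):
    """One fold step: apply a single event to the projection."""
    etype, data = event
    licenses = projection.setdefault(data["customerId"], set())
    if etype == "LicenseCreated":
        licenses.add(data["titleId"])
    elif etype == "LicenseRevoked":
        licenses.discard(data["titleId"])
    return projection

# The projector is a fold over the event stream.
projection = reduce(project, events, {})
# projection == {"BlackMirror": {"S01E02"}}
```

Rebuilding the projection is simply re-running the fold from an empty starting value, which is why the projection data can remain transient.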
The retrieval of information from the event store is done by building a projection. Queries are answered using the data available in the projection. Projectors can build the projection on-demand, or opportunistically: the given projection is built first and then the specific query is answered. However, it is also possible to pre-build the projection: the projector constantly watches the event streams and updates the projection whenever new events arrive. This decision depends on the ratio between reads of the projection and new events being appended to the stream. If a projection is read infrequently, it is unnecessary to constantly project new events, and thus consume resources. However, if a projection is read frequently, projecting the new event directly on arrival improves the performance of the query.
The behavior of the projector is similar to that of the higher-order function fold (Hutton, 1999), a recursion operator that works on lists, as stated by Meißner et al. (2018). The projector folds over the specific event streams and creates a projection. The integration of functional programming and domain-driven design is further explored by Wlaschin (2018).

Storing new information is done using the append operation. The append operation is the only operation that is capable of storing new events in the store. However, before storing these new events, they have to be produced. Events in an ESS are produced as a result of an action (the commonly used name is command) that is accepted by the system. The validation, resulting in an acceptance or rejection of the command, is done by the accept function.

Accept function. The accept function takes a projection and a command. The command is validated using the data in the projection, and the accept function results in either an error or an event.
The command follows the Command pattern described by Gamma et al. (1995). The system first builds a projection, and then validates the command using the accept function. Validation of the command can result in either a new event or an error (in case of a validation failure). The new event is appended to a specific event stream, which is selected based on properties present in the command. This appended event is the new information stored in the system. The projection is built solely to validate the command and is volatile.
A command can only affect a single stream, because the append operation appends to a single stream. To guarantee the consistency of information, the system should not append events to two streams in one request: one append might fail, leaving the system in an inconsistent state. This rule increases the importance of the design of the schema of an event store.
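The accept function can be sketched as a pure function from a projection and a command to either an error or a new event. The domain here (a license may only be created once per title) is a hypothetical example of ours, not taken from the systems under study.

```python
def accept(projection, command):
    """Validate a command against a projection; the result is either an
    error or a new event, never a direct state mutation."""
    if command["type"] != "CreateLicense":
        return {"error": "unknown command"}
    if command["titleId"] in projection["licensedTitles"]:
        return {"error": "license already exists"}
    return {"event": {"LicenseCreated": {"titleId": command["titleId"]}}}

# The projection is built solely for this validation and is volatile.
projection = {"licensedTitles": {"TheNationalAnthemS01E01"}}

rejected = accept(projection, {"type": "CreateLicense",
                               "titleId": "TheNationalAnthemS01E01"})
accepted = accept(projection, {"type": "CreateLicense",
                               "titleId": "FifteenMillionMerits"})
# rejected contains an "error"; accepted contains an "event" that would
# subsequently be appended to the stream selected by the command.
```

Because accept only returns the event, the append to the single target stream (and its optimistic concurrency check) remains the sole place where state changes.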

The Schema
An event store contains no schema for the specific structure of events. The data schema is not explicitly defined at all, but is implicitly encoded in the ESS. The knowledge of the data schema inside an ESS is encoded in the source code of the accept and project functions. This is similar to other systems with a so-called implicit schema (Fowler, 2013), such as document oriented data storage systems.
In general, events can take any form and thus the schema as well, therefore, we left these definitions abstract on purpose. However, we believe that these abstract definitions can be used to support the discussion of schema evolution, as we show in Section 7. This section defines event, event stream, and event store schemas, along with the conforms relation.

Event Schema. An event schema describes the type and form of events. conforms(e, s) holds if event e conforms to the specification s.
An event schema could be implemented by, for instance, XML Schemas or AVRO (The Apache Software Foundation, 2019). The latter uses the schema not only for validation, but also for serialization to a binary format. Two other options that can be applied to create a more formal event schema are domain specific languages (suggested by E11 and E14) and strongly typed classes (see Table 5).
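A minimal sketch of the conforms(e, s) relation for events can make the definition concrete. The schema representation below (a required type name plus required fields with primitive types) is our own simplification; real implementations would use XML Schema, AVRO, or typed classes as mentioned above.

```python
def conforms(event, schema):
    """conforms(e, s): the event carries the type named by the schema and
    every field the schema requires, with the expected primitive type."""
    (etype, body), = event.items()   # events are single-key dicts: {type: body}
    if etype != schema["type"]:
        return False
    return all(
        name in body and isinstance(body[name], ftype)
        for name, ftype in schema["fields"].items()
    )

license_schema = {"type": "LicenseCreated",
                  "fields": {"customerId": str, "titleId": str, "date": str}}

event = {"LicenseCreated": {"customerId": "BlackMirror",
                            "titleId": "TheNationalAnthemS01E01",
                            "date": "2014-01-06"}}
assert conforms(event, license_schema)
```

An event with a missing or mistyped field, or a different type name, would fail the check, which is exactly the knowledge that is otherwise implicit in the project and accept functions.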

Event Stream Schema. An event stream schema describes an event stream and the events that can occur in the stream.
The event stream schema contains the event schemas of the events that can occur in the stream, along with the patterns of occurrence. conforms(es, s) holds if event stream es conforms to the specification s.
An event stream schema contains both the specification of the events, and the specific patterns. An example schema contains both the schema (or specification) of the 'registered' event, and the fact that the 'registered' event occurs before a 'checkout' event.
Event Store Schema. An event store schema describes an event store and the streams that are stored in the event store. conforms(st, s) holds if event store st conforms to the specification s.
Similar to the event stream schema, the event store schema contains more knowledge than only the event stream schemas. For instance, cohesion between streams, such as the fact that when a specific stream contains a certain event another stream should exist, can also be specified in the event store schema. An explicit implementation of event stream schemas or event store schemas was not encountered during the interviews.

Event Sourced Systems based on CQRS
As we have seen in Section 4, every ESS under study also applies CQRS. CQRS was introduced by Young (2010) and Dahan (2009), and the goal of this pattern is to separate actions that change data (those are called commands) from requests that ask for data (called queries). Although event sourcing and CQRS can be used separately, the common application of the two patterns is worth exploring. Based on literature and the interviews an example architecture combining event sourcing with CQRS is discussed. This architecture is shown in Figure 1. As illustrated, the event store schema is part of the ESS: the event store conforms to it, and the command and query system encode it in their application logic.
In the command system, aggregates (as introduced by Evans (2003)) are used to process incoming commands (1). Commands are routed by the command handler to the correct aggregate. Aggregates will process the commands using the read and append operations. First the existing events are read (2), a projection is built (3), and then the accept function is called. When the command is accepted, the resulting event is appended to the event stream (4).
An aggregate reads a specific event stream, to which the new event is also appended. Often the aggregate will be the owner of the event stream it reads and appends to. As a benefit, commands sent to different aggregates can be processed concurrently without interfering. E6 describes a solution where multiple aggregates use the same stream. This variation is used to share generic behaviour among aggregates; it is mixed with more specific logic.

Figure 1: An event sourced system based on CQRS. The event store conforms to the schema, which is encoded by the command and query systems. The command system validates commands using the events. These same events are read by the query system to build the projections, which are used to respond to queries.
In the query system, projectors are used to build projections that can be used to return information to the sender. Queries are routed by the query handler to the correct projector (5), depending on the specific purpose of the projector (such as browsing or searching). The projector will retrieve the requested information from its projection. First the events from the event streams will be read (6), then the projection will be built (7).
While queries can be handled by building the projection on-demand, most ESSs based on CQRS will update the projection as soon as new events are appended. In that scenario, steps (6) and (7) will be executed before (5), and the projector can immediately use the projection to handle the query. This decision is based on the ratio between events and queries. When there are few queries and many events, pre-building the projection takes up resources (such as storage). If the workload consists of more queries, building the projection ahead of time results in faster response times. E24 describes a flexible approach that merges the two approaches in an on-demand fashion. The sequence numbers of events are used as checkpoints and allow the projectors to track which events are already processed. Immutability of the event store is crucial for these projectors: if events or their ordering are mutated, the checkpoint has no value and the projector needs to re-read the event streams and rebuild the projection.
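The checkpointing described here can be sketched as a projector that remembers the sequence number of the last event it processed and, on each poll, folds in only the events appended since. This is our own minimal illustration; the Registered/Deactivated event types are hypothetical.

```python
class Projector:
    """Pre-building projector: tracks a checkpoint (the next sequence number
    to process) so only new events are folded in on each poll."""

    def __init__(self):
        self.checkpoint = 0
        self.projection = {}

    def poll(self, stream):
        """Process any events appended since the last poll."""
        for event in stream[self.checkpoint:]:
            self.apply(event)
            self.checkpoint += 1

    def apply(self, event):
        etype, data = event
        if etype == "Registered":
            self.projection[data["user"]] = "active"
        elif etype == "Deactivated":
            self.projection[data["user"]] = "inactive"


stream = [("Registered", {"user": "alice"})]
p = Projector()
p.poll(stream)                                   # processes event 0
stream.append(("Deactivated", {"user": "alice"}))
p.poll(stream)                                   # resumes at checkpoint 1
# p.projection == {"alice": "inactive"}, p.checkpoint == 2
```

The checkpoint is only meaningful because sequence numbers never change; if events were mutated or reordered, the projector would have to reset the checkpoint to zero and rebuild.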
Most pre-built projectors are eventually consistent. As Vogels (2009) explains, the ESS guarantees that if no new commands are processed, eventually all queries will return the last updated value. However, because there is time between the acceptance of a command and the updating of a projection, a query might return an older value. The duration between (4) and (7) is the so-called inconsistency window: the command system and the query system do not share a consistent state. Eventual consistency was also listed as one of the challenges in ESSs and is discussed in Section 6.
Four engineers explain how their projectors share a database transaction with the aggregates. This allows them to achieve immediate consistency, because both the event and the projections are committed in a single transaction. In those systems scalability is sacrificed for immediate consistency. This implementation technique results in synchronous projections. Table 5 summarizes the different concepts and codes that were extracted from the interviews. While the definitions are mainly based on our experience in building an ESS, we have used the data extracted from the interviews to scope our description. The concepts and codes discussed by the engineers determined which specifics were described.

Challenges Faced in Applying Event Sourcing
A pattern description without a discussion of the consequences is incomplete, and would lead engineers astray. While Section 4 discusses the positive consequences that engineers experienced, they also discussed the negatives in the interviews. In this section we discuss five challenges experienced by the engineers with two goals in mind: (1) to indicate to practitioners what the limitations of the pattern are and (2) to formulate novel research topics for future research around the pattern. The first two challenges are addressed in more detail by two of our contributions in Sections 5 and 7. The challenges mentioned by the engineers are summarized in Table 6.

How can Engineers better be Supported in Learning how to Apply the Event Sourcing Pattern? - The most prominent category of challenges mentioned by the engineers is in the area of designing software. Designing ESSs is more difficult than designing other systems, because of two characteristics. In the experience of thirteen of the 25 engineers, thinking in events and state transfers is completely different from thinking in current state and database transactions. Section 5 proposes a description that improves the understanding, and supports the teaching, of event sourcing and event sourced systems (ESSs). However, an ESS introduces not only events and state transfers. Eventual consistency forces developers to let go of guarantees that they would have in a system using current state and synchronous processing. In a CQRS system, an update sent through a command will not immediately be reflected in the result of a query. The system first needs to process the event into one or more projections. Engineer E12 states that "a lot of developers had to get used to information not being in place", and E2 adds that "getting people to understand eventual consistency is the biggest hurdle." Eventual consistency forces developers to rethink the basic interactions of the user with the system.
We give two examples of interactions that force developers to rethink system design. The first example is that of the expectation of users to retrieve data that they previously submitted into the system. However, in a CQRS system, the query system might not directly return the data that was submitted through a command. The user interface of the system should make it clear to the user what is going on, or even try to hide the fact that the system is eventually consistent. The second example is that of developers who more or less have the same expectation. Often developers try to use the result of a query to make decisions in an aggregate. However, the query system might not have processed all events and misses recent updates. If developers overlook this principle, the decisions lead to bugs in the system.

How can Event Stores be Evolved? - Both E13, "we dreaded the upgrading, we had some fear in advance", and E22, "versioning in event sourced systems is a big problem", point out the perceived difficulty of upgrading ESSs. This challenge did not come as a surprise; our earlier work (Overeem et al., 2017) and the work of Young (2017) underline this. During the interviews we identified five fundamental techniques for schema evolution in ESSs, which are described in Section 7.

How can Tools, Frameworks, and Platforms be Provided to Make the Pattern even More Successful? - Eight engineers discuss the lack of standardized tools, such as frameworks, platforms, and databases. A commonly stated opinion within the community is that you do not need frameworks to implement an ESS. However, engineers E9, E10, E17, E20, E21 and E25 state that they wish to see more mature libraries and frameworks. Engineers E6, E17, E19, and E22 mention that infrastructure and tooling for ESSs is immature. Either the tooling does not support a broad enough set of scenarios, or the quality is lacking. How large the market is for specialized event sourcing tools is difficult to say.
Recently AxonIQ (2019) has started to offer commercial support for ESSs, similar to what Event Store (2019) does.

How can Projections be Optimized? - Projections, as discussed in Section 5, are used to retrieve information from the system. Rebuilding projections, however, can become a bottleneck for ESSs.
Engineers E11, E13, and E14 discuss their search for improved implementations of projectors. Quick improvements can often be found in faster database technology, or better use of hardware. Although rebuilding projections needs planning, engineer E14 discusses how they rather plan the rebuilding in weekends, instead of investing developer effort for optimization.
Engineer E16 explains how the domain can show an optimization: not reading all the events on a rebuild. Often the older events are no longer reflected in the projection, because the specific data (such as a classified advertisement) is no longer active.
Another important implementation detail that lifts some of the burden is that projectors can (and must) be implemented as independent, autonomous processes. This gives the system the possibility to rebuild only those projections that require it, instead of all the projections at once.

How can a System that Uses Event Sourcing Protect User Privacy? - Privacy regulations, such as the GDPR, are designed to protect users from being taken advantage of. Personal information should not be kept in a system for all eternity, and the system should delete it whenever someone requests that. However, such a requirement conflicts with the nature of event sourcing: retaining all the data. Engineers E20, E21, E23, and E25 mention that they designed their systems to comply with these regulations. Systems HealthSys and P-PaySys use some form of anonymization and removal of information to comply. Obviously, this requires them to rewrite events. System IdentitySys takes a completely different approach. The system separates the events and the personal information in two different stores. When events are read, they are supplemented with the personal information. If that information is no longer present (because of removal requests), default values are supplied.

Schema Evolution in Event Sourced Systems
A challenge discussed by multiple engineers is evolution of event sourced systems (ESSs) (as stated in Section 6). From the transcripts, we identified five fundamental techniques for schema evolution. These event schema evolution (ESE) techniques are described using the definitions given in Section 5.
We encountered two reasons why event schema evolution in ESSs is difficult. First of all, the implicit schema (as described by Fowler (2013)) makes evolution in ESSs difficult. Solutions as proposed by Meurice et al. (2016) and Maule et al. (2008) to analyze the impact of schema changes are not usable, because there is no explicit schema. In contrast to their setting, the change originates in the application and impacts the data in the event store. This makes the direction of the impact different from theirs.
The second difficulty in event schema evolution is the immutability of the event store. Traditional solutions that transform or rewrite the store are not always possible. However, the benefits of immutability in event stores (as listed in Section 4) are not always requirements. The different degrees of immutability, as shown in Table 2, allow for different evolution techniques.
Teams that apply event sourcing without a clear understanding of the business domain introduce risk, according to E14, E16, and E22. E22 explains that the challenge of evolution is exactly why it is preferred to always start a new system without event sourcing, and only introduce event sourcing when the domain knowledge is stable: "once we have enough trust in our model we will transform to event sourcing." As E16 confirms, events based on a sufficiently clear domain knowledge will decrease schema evolution.
Another prevention technique is the cleaning up of events in the event store, of which we encountered two possibilities. First of all, older events that no longer represent active information can be moved into cold storage. These events can still be read and processed, but are no longer processed by the ESS itself. Therefore, they do not have to conform to the implicit schema of the ESS. Second, sometimes these events can be kept in the event store itself, but the ESS will never read them. Again, this makes it possible to ignore those events on upgrades.
Event schema evolution that cannot be prevented can be solved by the following five evolution techniques. Although in our earlier work (Overeem et al., 2017) we also discuss five techniques, during the interviews a different set of techniques was encountered. The technique lazy transformations was not mentioned by any of the engineers, while weak schema was mentioned as a new technique. Which techniques are used by which engineers, and the benefits and liabilities per technique given by the engineers during the interviews, are classified in Table 7. In some cases the liabilities also come from engineers that do not apply the particular technique: they stated the liability as a reason for not using the technique.

ESE Technique 1: Versioned Events - Given an event store st conforming to a schema S, the technique versioned events transforms the schema into S′ such that conforms(st, S′) ∧ ∀s ∈ S : ∃s′ ∈ S′ : s ⊆ s′. This technique introduces only new types of events, and does so in such a way that the event store conforms to S′ without transformation. The functions that process the involved streams are required to handle these new events. FINDINGS This technique is applied by engineers E7 and E19, with the sole benefit that it is a simple technique.

ESE Technique 2: Weak Schema - RELATED WORK This technique is described by Daigneau (2011) as the tolerant reader pattern. Serialization formats such as Protobuf by Google Inc. (2019) and AVRO by The Apache Software Foundation (2019) support this technique by reading the existing binary data into the new version of the objects. FINDINGS Eleven engineers apply this technique, because of its simplicity. The limitations of this technique are stated as a liability, together with the pollution of the project operation that is required (E9 explains: "you want to assume a certain event schema").

ESE Technique 3: Upcasting - This technique is well known to event sourcing practitioners and described by Betts et al. (2013). The event streams are transformed into streams conforming to the latest schema by a new function: the upcast function.
This function is called before the streams are passed into existing functions. The transformation is centralized in this new function, which improves the maintainability of the system.
For the consuming functions it appears that little has changed: it appears that the relation conforms(es, S′) holds. However, events already stored in es still conform to S, while newly appended events conform to S′. After appending new events to es, the store itself will conform to neither S nor S′.
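An upcast function can be sketched as a per-event transformation applied on read, before the stream reaches the existing project and accept functions. The concrete schema change (version 1 stored a single name field, version 2 splits it) is a hypothetical example of ours.

```python
def upcast(event):
    """Upcast an event from schema version 1 to version 2.
    Hypothetical change: v1 stored a single 'name' field, v2 splits it."""
    if event.get("version", 1) == 1:
        first, _, last = event["name"].partition(" ")
        event = {"version": 2, "firstName": first, "lastName": last}
    return event

def read_upcasted(stream):
    """Applied on read, before the stream is passed to the existing
    functions; the stored events themselves remain untouched."""
    return [upcast(e) for e in stream]

stream = [{"version": 1, "name": "Ada Lovelace"},
          {"version": 2, "firstName": "Grace", "lastName": "Hopper"}]

assert read_upcasted(stream) == [
    {"version": 2, "firstName": "Ada", "lastName": "Lovelace"},
    {"version": 2, "firstName": "Grace", "lastName": "Hopper"},
]
```

Because the transformation runs on every read, a long-lived system accumulates a chain of such functions, which is the performance liability reported below.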
RELATED WORK The technique is similar to message translators as described by Hohpe and Woolf (2004). FINDINGS Twelve engineers use upcasters, claiming benefits such as no domain pollution, immutability of events, and simplicity of implementation. One of the stated liabilities is a decrease in performance: "If you have been running upcasters for a long time, you will have quite a stack of them in place, which slows down the entire loading." Another liability is added complexity in analyzing the event store, because it contains events that conform to different schemas.

ESE Technique 4: In-Place Transformation - This technique updates events to resemble the new schema, and thus forces ESSs to forgo immutability. New operations that alter event streams need to be introduced, such as insert (insert an event at a specific position) and update (update the event at a specific position). These operations break the immutability of the event store, with the consequence that cached projections need to be rebuilt. Therefore, two available event stores, Event Store (2019) and AxonDB (AxonIQ, 2019), deliberately do not offer these operations.
RELATED WORK This technique is similar to migration scripts for relational databases. Scherzinger et al. (2013) and Saur et al. (2016) both propose a similar approach to evolve data in a NoSQL store. The lazy migration (on data access) is similar to incremental migration as described by Sadalage and Fowler (2012). FINDINGS Four systems, HealthSys, PaymentSys, ApproveSys, and Advert1Sys, apply this technique. Benefits are the possibility of ad-hoc fixes, and improved reasoning because the store will only contain events conforming to a single schema. However, the risk of making errors, the loss of immutability, and the performance cost are stated as liabilities. E22 explicitly prevented this technique from being used: "to prevent this technique we first zipped the events, and then encoded the result before storing them."

ESE Technique 5: Copy-And-Transform - During the execution of this technique, existing streams are processed and new streams are created from transformed events that conform to the new schema. This does not violate the immutability of the source events, but creates new events instead. Existing projections remain valid, although they do need to process the new streams to receive new events. RELATED WORK Young (2017) describes this technique as copy and replace. The parallel universe of IMAGO, as described by Dumitraş and Narasimhan (2009), is similar to this technique. QuantumDB, created by de Jong and van Deursen (2015), uses ghost tables to apply this technique in relational databases. Copy-and-transform of a complete event store can be seen as an ETL process that creates a new store. FINDINGS Fourteen engineers have used this technique, either to transform specific streams or a complete event store. As E6 states, this technique is relatively simple to implement, because "we can do literally anything we want." The data preservation is stated as a benefit, as well as the fact that this is a one-time operation.
The performance of this operation is a liability: transforming a large store takes a considerable amount of time.
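A minimal sketch of copy-and-transform in Python, assuming an in-memory store of named streams; all names and the concrete schema change are illustrative, not taken from the studied systems:

```python
def transform(event: dict) -> dict:
    # Hypothetical schema change: amounts move from euros to integer cents.
    new = {k: v for k, v in event.items() if k != "amount_eur"}
    new["amount_cents"] = round(event["amount_eur"] * 100)
    new["schema"] = 2
    return new

def copy_and_transform(store: dict, source: str, target: str) -> None:
    # One-time migration: the source stream is read but never modified, so
    # the immutability of the original events is preserved; consumers are
    # then switched over to the new stream.
    store[target] = [transform(event) for event in store[source]]

store = {"payments-v1": [{"schema": 1, "amount_eur": 12.5}]}
copy_and_transform(store, "payments-v1", "payments-v2")
print(store["payments-v1"])  # source stream is left untouched
print(store["payments-v2"])
```

Because the whole source stream is re-read and re-written, the running time grows with the size of the store, which matches the performance liability the engineers report.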
The data discussed in Table 7 does not allow us to discuss how techniques are combined within a single system. It does allow us to discuss how engineers have experienced and applied different techniques over the course of working on multiple systems. We can observe the following from the discussed engineering experiences:
• No engineer has solely applied versioned events or in-place transformation; those techniques are clearly used in combination with others.
• Five engineers have solely applied upcasters, which corresponds with the general advice we found in the grey literature and community.
• The copy-transform technique is mostly used in combination with other techniques; only two out of the fourteen engineers have solely applied this technique.
• Four engineers have considered techniques, but opted to not apply them: E9 considered versioned events and weak schema, E16 considered versioned events and copy-transform, E22 considered in-place transformation, and E24 considered copy-transform and in-place transformation.
We conclude that the techniques are not exclusive: almost all engineers have used multiple techniques and applied multiple techniques in a single system. Example combinations mentioned in the interviews are:
• The application of upcasters, with copy-transform to clean up the upcasters when there are too many.
• The application of in-place transformation for quick patches, while a different technique is used for planned evolution.
• The application of weak schema for simple evolution steps, while a different technique is used for more complex evolution.
From the study we formulate the following advice:
1. Versioned events and weak schema are the simplest techniques to implement. Systems should start out with those techniques.
2. When evolution operations cannot be handled by the first two techniques, systems can apply upcasting. This retains the immutability of the event store.
3. Only when a decrease of performance or maintainability is experienced should systems apply copy-and-transform.
4. In-place transformation should only be used by systems that do not require immutability or an audit log.
The techniques form a range of possibilities to evolve the event store of an ESS. All techniques but one, in-place transformation, can be applied in an ESS that follows the definition given in Section 5.

Discussion
One could wonder whether another research approach would have been equally successful in extracting architecture knowledge about the event sourcing pattern. We have looked at open source systems such as Axon Framework (2019), Event Store (2019), NEventStore Dev team (2019), and Prooph Components (2019), and observed that these follow the pattern and guidelines as discussed in this article. However, aspects such as the rationale and consequences of using the pattern are impossible to extract this way. This research is also similar to a study with multiple cases (Flyvbjerg, 2006), although one would expect a more extensive extraction of information about the case (i.e., system) and its context in a multiple case study. We would have had to use more research resources, but perhaps we would have also been able to provide more code examples of how the pattern was implemented. Finally, design research (Sein et al., 2011) could have also been used to extract the pattern description. While the description would perhaps have been less extensive, there would have been more focus on the evaluation and validation of the pattern and its description. We consider this last aspect as future work, even though we are convinced that the incremental nature of this research has led to a pattern description that is reusable and useful for architects.
Our pattern description does not follow a specific format. We decided to structure our presentation according to the concepts that emerged from the GT, and not according to a specific pattern description format. We did, however, use the examples of Gamma et al. (1995) to evaluate the completeness of our pattern description.
Gamma et al. state three essential elements besides the pattern name: the problem, the solution, and the consequences. The problem describes the context of the pattern and when to apply it, which we have summarized in Section 4. The description of the pattern, the solution, is covered in Section 5. Finally, the consequences are split over two sections: Section 4 covers the positive consequences by linking them to the problems that are solved, and Section 6 covers the negative consequences by stating several research challenges for future work.
The format that Gamma et al. use to describe patterns consists of thirteen different sections. While these sections cover the four essential elements, the related patterns section deserves a discussion of its own. The design of a software system is never the application of a single pattern, but rather the combination of different patterns that together form the design. This is no different in ESSs. Section 5 recognizes this, and explains the combination of event sourcing and CQRS in great detail. The relation to other patterns that solve the specific challenges of schema evolution is covered in Section 7.
A second question that must be asked is whether academic fora are the optimal place to publish patterns. As whole books have been written about particular patterns and as patterns appear to have a certain shelf life, one could wonder whether patterns should be published in academia at all. We argue, with this article, that some patterns are too important to ignore (SOA, Client-Server, Event Sourcing, etc.) and that these deserve specific detailed attention from academics. We find the strongest proof for this in the provided research challenges (Section 6) and in the challenge discussion about evolving event sourced systems (Section 7).
The number of interviews does not allow us to generalize over the results. It is not possible to conclude that a technique is the recommended one simply because 14 engineers use it. However, practitioners can integrate the reported experience into their decision making. They can weigh the context of the interviewed engineers and match it with their own context. Although our research does not result in hard recommendations, we believe that practitioners can benefit from the reported experiences.

Threats to Validity
Both Golafshani (2003) and Onwuegbuzie and Leech (2007) discuss the challenges of assessing validity in qualitative research. We identify several biases for both internal and external validity. First, we regard the objects of study, i.e., the engineers and their uses of and experience with the pattern. The contributions of our research are based on the 25 interviews that were conducted. The engineers were not hand-picked, but volunteered. Therefore, it is possible that we only interviewed a particular subset of practitioners: those willing and able to discuss the pattern at length. It is for instance remarkable that they all combine CQRS with event sourcing. Table 1 shows a wide variety of experiences, and Table 2 shows an equally wide variety of systems. We have interviewed consultants (E14 and E16) and full-time employees, with a wide range of years of experience. From small systems to multi-million user systems, the interviewed engineers have been exposed to all. These characteristics indicate a broad range of opinions and experiences. Within the group of 25 engineers, 16 have three years or less of experience working on ESSs. This could be due to the relative novelty of the pattern. However, these engineers were involved full-time in the development of the ESS. The exploratory questions (Appendix A) focus on topics that can sufficiently be answered by engineers with one or two years of experience.
Internal validity, which is strengthened by the way in which the research is conducted, has been defended in several ways. First, an interview and analysis protocol (Appendix A) has been applied to each interview. The interview protocol was created from an extensive literature study and discussion in the research team, in which two members have no experience with the pattern itself, thereby reducing bias. The first two authors have extensive experience in developing a large ESS. This experience has led to many interactions with practitioners in gatherings, conferences, and on-line. These interactions have served as an informal triangulation that supports the findings presented in this article.
As a constructivist GT approach (Charmaz, 1996) was followed, we conducted relatively open interviews. The exploratory nature of the interviews enabled interviewees to comment on all aspects of the subject under study, independent of their experience with the pattern. Many engineers work on closed source, commercial systems, which makes it hard to use documentation or source code in the research. Every interview was closed with the questions whether anything important was left unasked, and whether they knew other engineers that we should interview. Often the engineers came with stories and anecdotes that amplified the discussed topics. The engineers that were referred to us were all invited to cooperate.
External validity, i.e., generalizability to other cases, can be defended by the multitude of systems that the engineers have observed and worked on.
As already discussed in Section 2 we do not claim to have reached saturation. Not reaching saturation could leave us open to missing crucial information, or even using incorrect information. Seven of the interviewed engineers have five or more years of experience, and we did not find conflicts between their statements and the other interviews. Together with the experience of the first two authors in developing ESSs, we believe that our findings are supported by the data.
We have not covered all niches in the software world, so we cannot generalize to all types of systems. However, we do believe that in the domain of business information systems, we have sufficient coverage to claim generalizability to other systems in this domain. Furthermore, while we do not claim generalizability to other domains, we do believe that those domains can be inspired by our findings in designing event sourced systems. Also, the common occurrence of all event sourcing evolution techniques in Table 7 illustrates that we observed a broad cross-section of systems in use. Finally, the use of GT has provided us with a reliable manner of extracting concepts and definitions from the interviews. While this study's findings can be generalized to describe event sourced patterns, the research work is not finished.

Conclusion
In this article we present a conceptualization of the event sourcing pattern, grounded in interviews with 25 event sourcing engineers. Event sourcing is a pattern that solves the three problems that modern systems face. The flexibility that the combination of event sourcing and CQRS gives decreases the complexity of large systems. This decrease in complexity enables the development of larger systems that remain maintainable. The reliability of a system improves when every state change is stored in a durable store: it allows engineers to undo state changes that were incorrect, or to replay those state changes after system failures. Improved reliability is essential for systems that support increasingly critical processes. Finally, systems that serve increasing numbers of end-users benefit from the improved scalability that ESSs provide.
These benefits give enough reason to incorporate event sourcing in modern systems. This article presents a thorough description of the pattern, including the context in which it is applied and the consequences that are encountered. The description itself is grounded in the experience of 25 engineers, making it a reliable source for both new practitioners and scientists. We answer the following four research questions in this work.

What types of systems apply event sourcing, and why?
The overview of 19 systems, given in Section 4 and especially in Tables 2 and 4, shows that event sourcing can be applied in systems of any size: both smaller and larger systems benefit from the pattern. We studied systems with thousands of events up to and including systems with billions of events, and all of these systems have benefited from event sourcing, according to their engineers. As E14 states, "I have never seen an event sourced system that was rewritten to a system with traditional current state storage." The event sourcing pattern is not tied to a specific type of application, but is applied in many different domains, such as marketing, micro-lending, content management, and classified advertising. The systems under study show a strong relation to DDD as a software development approach. This is partially explained by the fact that event sourcing and CQRS were invented in the community that grew around DDD. The microservice architectural style has a weaker relation (8 out of 19 systems apply it), while CQRS is used in all of these systems. We identify four reasons for adopting event sourcing: audit, flexibility, complexity, and trending. While a common characteristic of event sourcing is the immutability of the events, we show that there are three levels of immutability that can be found in ESSs. The characteristics summarized in Table 2 substantiate that event sourcing can be applied in a diversity of domains and technologies.
How can event sourced systems be defined? Section 5 gives definitions of the different concepts in event sourcing and event sourced systems. These definitions are based on our five years of experience in building an ESS, augmented with the interviews. The experiences of the interviewed engineers add nuance and variation to the different concepts, making them reflect the view of practitioners. Concepts and codes extracted from the interviews scoped our definition: the engineers provided us with the topics to define through the interviews.
How can event sourced data structures be evolved? Five event schema evolution techniques are discussed in Section 7: versioned events, weak schema, upcasters, in-place transformation, and copy-transform. For every technique, the benefits and liabilities as discussed with the interviewed engineers are summarized in Table 7. Almost all engineers have experience with multiple techniques, often combining them in a single system. As all techniques have their benefits and their liabilities, we did not find a single technique that would be applicable in all scenarios. We conclude the section with general advice on when to apply specific techniques, and how to combine them.
What are the challenges faced in applying event sourcing? Five challenges that the interviewed engineers experienced are discussed in Section 6 and summarized in Table 6. We address the steep learning curve in Section 5 by giving definitions and operations that can be used in the discussion and teaching of ESSs. Evolution is discussed in detail in Section 7, again using the concepts and operations to explain and characterize the different techniques. The other three challenges, lack of technology, rebuilding projections, and privacy, are presented as the start of a research roadmap. We call for researchers to further explore these challenges.
The main scientific contributions are found in Sections 2 and 6. In the research approach, we aim to inspire future architecture researchers to use similar qualitative techniques, such as GT, for the explication of architecture knowledge from practitioners. Secondly, a set of research challenges is provided for software engineering researchers to challenge the knowledge around event sourcing in large software systems. Additionally, we are excited to define and document such an important software pattern for the scientific community.

A. Interview Protocol
Context related questions