When an app suffers from poor performance, it’s easy to set up a cache in front of the SQL database. It doesn’t fix the root cause (e.g. bad schema design, bad SQL query, etc.) but it gets the job done. If the app is the only component that writes to the underlying database, it’s a no-brainer to update the cache accordingly, so the cache is always up-to-date with the data in the database. Things start to go sour when the app is not the only component writing to the DB. Among other sources of writes, there are batches, other apps (shared databases exist, unfortunately), etc. One might think of a couple of ways to keep the data in sync, e.g. polling the DB every now and then, DB triggers, etc. Unfortunately, they all have issues that make them unreliable and/or fragile. You might have read about Change Data Capture before. It’s been described by Martin Kleppmann as turning the database inside out: the DB can send change events (INSERT, UPDATE and DELETE) that one can subscribe to. In contrast to Event Sourcing, which aggregates events to produce state, CDC is about getting events out of state. Once CDC is implemented, one can subscribe to its events and update the cache accordingly. However, CDC is still in its early stages, and implementations are quite specific. In this talk, I’ll describe an easy-to-set-up architecture that leverages CDC to keep an evergreen cache.
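To make this concrete, here is a minimal sketch of the cache-updating side, assuming Debezium (one popular CDC implementation) publishes row-level change events to a Kafka topic; the topic name, configuration and in-memory cache are hypothetical stand-ins:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CacheSynchronizer {

    // Stand-in for the real cache (e.g. a Redis or Hazelcast client).
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

    public void run() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "cache-sync");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Hypothetical Debezium topic: one topic per captured table.
            consumer.subscribe(List.of("dbserver.inventory.customers"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    applyChange(record.key(), record.value());
                }
            }
        }
    }

    // Debezium marks the operation in the event envelope ("c"reate, "u"pdate,
    // "d"elete) and emits a tombstone (null value) after a delete.
    private void applyChange(String key, String value) {
        if (value == null) {
            cache.remove(key);        // delete: evict the stale entry
        } else {
            cache.put(key, value);    // create/update: upsert the latest row state
        }
    }
}
```

A real consumer would deserialize Debezium’s JSON envelope and talk to an actual cache; the point is only that every write to the DB, whoever issued it, eventually reaches the cache.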
Software architecture is a soft term, apparently able to mean whatever the speaker intends it to mean at will, with little to no argument from the audience, industry, or other architects. One would think that by this time in our industry’s lifecycle, we’d have nailed some of this down by now, so let’s do that — let’s nail down what software architecture looks like in the modern software development age, including nods to agile practices, continuous-everything and DevOps, polyglot programmers and maybe even a kitten.
In our audits we identify major challenges for the future-proofing of software systems. Our customers range from public administration to DAX companies, startups and the church. In this talk we share our experiences from analyzing more than 100 systems during the last ten years. The challenges include inscrutable code organization, vintage technology stacks, architectures without modularization and customer variants created by copy and paste.
What these challenges have in common is that they do not simply drop out of a tool. Instead, identifying each challenge requires collecting evidence, tying together manual inspection and automated analysis, applying appropriate visualizations and drawing the right conclusions. Hence, we do not only share the final results, but focus on the methodology and the individual steps we took to identify the challenges. We hope this will inspire other architects to identify challenges in their own systems.
Software systems often live for many years or even decades, are carefully maintained and patched again and again. But at some point the UI looks dusty, changes take forever and you want to benefit from the possibilities of modern technologies. The decision to modernize the system is made. And then comes the simplest requirement in the world, which we’ve all heard before: ‘But the new system must be able to do the same things as the old one!’ It is not surprising that we hear this requirement so often: it is simple, it can be formulated even if you know the system only superficially, and it seems to be precise. In fact, this requirement is quite nonsensical. In this talk, we will explain why the simplest requirement in the world is nonsense and provide experience, guidance, and best practices for modernization projects where the goal is to design the target state and plan the path of modernization.
We see a lot of confusion regarding architectural work these days. When? How much? Who? Tons of heated debates and nobody asking the essential question: Why? But without asking Why, all the other questions are futile. Thus, we will start this session by asking: *Why* do we need architectural work? Which problem(s) does it address? After finding some surprising answers, we will move on and ask ourselves: *How* can we do it best? What are the key activities of architectural work? Finally, we will ask ourselves: *When* should we do *what* and *how much*, depending on the given context? After this session, we will have created a much clearer picture of what architectural work is actually about – without the usual fuss, stripped down to its pure essence.
Naming is hard. Teams who write software often give rather technological names to the components they write. This is a problem because it makes the system less understandable and more difficult to extend. I propose to show more respect to the language that the users and other domain experts speak, because it allows us to think as they think, which makes our software more compatible with the users’ mental models. Let’s name the modules, components, interfaces, services and entities in a way that is as business-friendly and technology-neutral as possible, and use some patterns from Domain-Driven Design to achieve an architecture that is well-structured, easy to explain and easy to maintain over the years. It’s a matter of attitude and attention, folks!
One of the objectives of software architecture is understanding and communicating complexity. We have long recognized that the most effective way to communicate complexity is via human language. But language poses a challenge when working in a team composed of members of different (sub)cultures and nationalities, each with a native language that might not be the same as the language we are communicating in. Each of these cultures has a different perception about how to communicate effectively. By way of example, in some cultures, it is considered appropriate and respectable to use the tentative voice: “perhaps we should consider trying X”. Whereas in other cultures it is the assertive voice that is valued: “This is how we should do it”. Assuming that everybody in the room wants to communicate effectively, what aspects can we define that impact our design? What organizational culture fits better with what type of architecture (microservices, monolith)? And what cultural needs must these architectures and boundaries address to succeed? Join us in this interactive talk where we together explore these challenges!
This talk discusses whether event-driven architectures can be used in advisory software as a form of reactive architecture. To do so, business requirements as well as technical implementations are covered. Why was the event-driven approach selected? The business and technical requirements that drive the selection of an event-driven architecture are discussed. The event-driven approach is the most suitable to support the decoupling of systems and to give the end user a comprehensive view of the activities along the advisory process.
1. Introduction of rough requirements for the advisory software
2. Introduction of the advisory process
3. Variant as controlling process (classical architecture)
4. Variant as event-driven architecture (reactive architecture)
5. Advantages and disadvantages of the selected approach
6. Takeaways
When was the last time you took a day off? Are you going to the office every weekday? How often do you work with people? All these questions usually show one thing – we all have stress that comes from our work. The famous work-life balance is often non-existent in many organizations, and developers feel trapped in their daily routine of “delivering business value” to their employer. With stress, your creativity shrinks and your innovative approach dies in the busy work you do.
What if I were to tell you it does not have to be that way? In this talk, we will discuss ways to relax and avoid “Stress Driven Development”. We will look at the problem from the perspectives of an individual contributor, a technical lead, and a manager. As a result, the audience will be able to take away best practices for tackling stress and helping others in their organizations become more productive and simply happier individuals.
IT is characterized by innovation and rapid change. Sustainability, and with it the interests of the next generation, has not been much of a focal point so far. However, in order to achieve the ambitious climate targets, the IT sector, as an important driver of digitization, has to make its contribution as well. An intelligent use of resources to avoid waste is a start. The experience gained in economical energy consumption on a large scale in the cloud, but also on a small scale in the embedded sector, can be useful for companies too. I present some aspects that have received too little attention as of yet, and how one can improve one’s climate footprint by switching to a “green cloud” and other processor architectures. In this way, the competing interests of economy and ecology can be better combined.
Software Architecture is about the important things, where “important” means high-risk and hard-to-change decisions. DevOps tries to develop a culture where constant experimentation and learning take place while the environment changes rapidly. How can this fit together? In this talk I will present general strategies agile teams can use to build and foster a DevOps culture while at the same time ensuring high-quality and sustainable software delivery. These strategies will be illustrated with real-world examples from different domains and environments.
What is the internal structure of your cloud-native Java microservice? How do you organize your SPA / PWA frontend? Is there a relation between the frontend and backend design? This session will introduce a consistent, feature-driven, standards-based structure called BCE and apply it to microservices and SPAs. I will create and review a lot of code and present the ideas on a few slides. Your questions are highly appreciated.
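As a preview of that structure, here is a minimal sketch of a BCE-organized “orders” feature, assuming a Jakarta EE stack; all package and class names are hypothetical:

```java
// --- com/example/shop/orders/entity/Order.java ---
// entity: the feature's domain objects and state
package com.example.shop.orders.entity;

public record Order(long id, String status) {}

// --- com/example/shop/orders/control/OrderFinder.java ---
// control: feature-internal logic, invisible to other features
package com.example.shop.orders.control;

import com.example.shop.orders.entity.Order;

public class OrderFinder {
    public Order byId(long id) {
        return new Order(id, "OPEN");   // placeholder for a real repository lookup
    }
}

// --- com/example/shop/orders/boundary/OrdersResource.java ---
// boundary: the feature's API; the only package other features may depend on
package com.example.shop.orders.boundary;

import com.example.shop.orders.control.OrderFinder;
import com.example.shop.orders.entity.Order;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;

@Path("orders")
public class OrdersResource {

    @Inject
    OrderFinder finder;

    @GET
    @Path("{id}")
    public Order find(@PathParam("id") long id) {
        return finder.byId(id);
    }
}
```

The cut is by feature first, stereotype second, so each feature reads as one self-contained unit.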
Natural language processing (NLP) has made incredible strides in the past few months. With these new possibilities and more and more textual data, we will see an increasing demand for NLP-based development. We have built several NLP-based systems in production quality over the past eight years. It didn’t always go completely smoothly, and the product didn’t always deliver what we wanted to achieve in the end. In this talk we would like to report on our experiences: 1. What are the use cases of text analytics? 2. Which specific information can be extracted from text using NLP libraries? 3. What are the limits of text analytics? This talk should encourage you to engage with NLP while at the same time tempering the current euphoria and expectations.
Today, end users expect subsecond response times and 100% uptime, often for applications dealing with terabytes of data. Reactive systems can address these requirements, as they are more flexible, loosely coupled, and scalable, making them easier to develop and amenable to change. They are also significantly more tolerant of failure, and when failure does occur, they meet it with elegance rather than disaster. Reactive systems are highly responsive, giving users effective interactive feedback. In this session, you will learn how users adopt reactive patterns for their high-performance applications and have a look at typical, well-architected implementations on AWS.
During my career in IT and people development I have had several turning points where I either was made to use journaling techniques or experimented with them myself to successfully tackle the next challenge. Over the years I have reflected on why those ‘written self-reflection’ techniques are so powerful and – at the same time – still quite rarely used in a business context. In this workshop I will happily share my experience and my findings, backed by a scientific psychological model, with you! You want to leverage your resources?
You want to change habits in your life’s “departments”? You want to harvest outstanding outcomes – at work and beyond? YES? Then join us to get ready for ACTion and be inspired how to leverage journaling techniques – at work & beyond. We’ll even use our hands, hearts and minds to directly try out some of them!
Cloud storage footprint is in the exabytes and growing exponentially, and companies pay billions of dollars to store and retrieve data. In this talk, we will cover some of the space and time optimizations which have historically been applied to on-premise file storage, and how they can be applied to objects stored in the cloud. Deduplication and compression are techniques that have traditionally been used to reduce the amount of storage used by applications. Data encryption is table stakes for any remote storage offering, and today we have client-side and server-side encryption support from cloud providers. Combining compression, encryption, and deduplication for object stores in the cloud is challenging due to the nature of overwrites and versioning, but the right strategy can save millions for an organization. We will cover some strategies for employing these techniques depending on whether an organization prefers client-side or server-side encryption, and discuss online and offline deduplication of objects. Companies such as Box and Netflix employ a subset of these techniques to reduce their cloud footprint and provide agility in their cloud operations.
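To illustrate why the order of operations matters, here is a minimal client-side sketch (all names are illustrative, error handling elided): deduplicate on a content hash of the plaintext, then compress, then encrypt. Encrypted data is effectively incompressible, and random IVs would make identical objects produce different ciphertexts if encryption came first:

```java
import java.io.ByteArrayOutputStream;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.HexFormat;
import java.util.zip.GZIPOutputStream;

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class ObjectPipeline {

    // Content hash of the plaintext serves as the dedup key: identical objects
    // map to the same key, so only one stored copy is needed.
    static String dedupKey(byte[] plaintext) throws Exception {
        return HexFormat.of().formatHex(
                MessageDigest.getInstance("SHA-256").digest(plaintext));
    }

    // Compress before encrypting: ciphertext does not compress.
    static byte[] compress(byte[] plaintext) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(plaintext);
        }
        return bos.toByteArray();
    }

    // AES-GCM with a fresh random IV per object; the IV is prepended so the
    // stored object is self-contained for later decryption.
    static byte[] encrypt(byte[] compressed, SecretKey key) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(compressed);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}
```

Note that deduplicating on plaintext hashes leaks equality information between objects; schemes such as convergent encryption trade this off differently.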
In the development and implementation of AI-based systems, the main challenge is not to develop the best models/algorithms, but to provide support for the entire lifecycle – from a business idea, through collection and management of data, software development managing both data and code, product deployment and operation, to its evolution. There is a clear need for specific software architecture support for AI. In this talk we will show the different aspects a software architect has to master to integrate AI-based technologies: for instance, patterns for AI-based solutions, mastering a new development approach, and handling requirements and safety issues in AI-based systems.
Our hope is that anyone looking to embark on a legacy modernisation programme or who is currently involved with one will find some useful advice here. We have spent most of the last couple of decades helping large organisations overhaul their legacy systems. In doing this we’ve learned a great deal about what works and seen many paths that lead to failure. In this talk we describe several of the legacy supplantation patterns that we found to be successful as well as some of the “anti-patterns” that more often than not lead to failure. For each pattern, we describe a particular approach, the context where it’s effective, and explain how and why you might use it, giving real world examples along the way. Key to our approach is seeing legacy replacement as a holistic activity that cuts across technology, business processes and organisation structure. In more detail, using these patterns often means discovering how one large technical solution meets multiple business needs and then seeing if it is possible to extract individual needs for independent delivery using a new solution. We describe how different elements of current solutions might be mapped to business capabilities and, using examples, how the various patterns can then be used to incrementally deliver these replacement solutions over time. A common objection is that finding these “seams” in existing systems is too difficult. While we agree it is challenging at first, we have found it to be a better approach than the alternatives, which all too often result in Feature Parity and Big Bang releases. We describe these anti-patterns as well as some of the underlying organisational reasons many legacy replacement programmes fail. This talk is drawn from material being produced in collaboration with Martin Fowler and James Lewis which will be published in the coming months on Martin’s site.
Recent research summarised in the book Accelerate points to a set of practices that lead to high software development organisation performance. Simultaneously, research from the Santa Fe Institute on Complex Adaptive Systems over the last 20 years seems to point to a grand unified theory of organisational design. So have we cracked it? Do we now have the answer to the question: how do we create and scale high performing software and organisations? In this talk, James explores the relationships between team structure, software architecture and the emergent phenomenon of complexity science.
Software metrics can be used effectively to judge the maintainability and architectural quality of a code base. Even more importantly, they can be used as canaries in a coal mine to warn early about dangerous accumulations of architectural and technical debt. I will introduce some key metrics that every architect should know (e.g., average component dependency, propagation cost, structural debt index, and more). Then I will describe the journey to create a metric for maintainability and introduce a new metric called Maintainability Level. This metric is promising because its value usually matches quite well the gut feeling of developers about the maintainability of their software systems. Therefore, it can be used to monitor code maintainability and as an early warning indicator if things move in the wrong direction.
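To give a flavor of one of these metrics, here is a small self-contained sketch computing Average Component Dependency (ACD) in the sense of John Lakos: sum each component’s transitive dependency count, itself included (the Cumulative Component Dependency, CCD), and divide by the number of components. The three-component graph is a made-up example:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class Acd {

    // All components reachable from 'start' along dependency edges,
    // including 'start' itself.
    static Set<String> reachable(String start, Map<String, List<String>> deps) {
        Set<String> seen = new HashSet<>();
        Deque<String> todo = new ArrayDeque<>(List.of(start));
        while (!todo.isEmpty()) {
            String component = todo.pop();
            if (seen.add(component)) {
                todo.addAll(deps.getOrDefault(component, List.of()));
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        // Illustrative dependency graph: ui -> service -> domain.
        Map<String, List<String>> deps = Map.of(
                "ui", List.of("service"),
                "service", List.of("domain"),
                "domain", List.of());

        int ccd = deps.keySet().stream()
                .mapToInt(c -> reachable(c, deps).size())
                .sum();                                   // 3 + 2 + 1 = 6
        System.out.printf("CCD=%d, ACD=%.2f%n", ccd, (double) ccd / deps.size()); // ACD=2.00
    }
}
```

Watching such values trend upward over releases is exactly the canary-in-a-coal-mine usage described above.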
Micro Frontends are not the only solution and – unsurprisingly – not always suitable. Hence, in this session, I present an alternative frontend architecture that we have successfully used in numerous enterprise projects in recent years: the Frontend Modulith. We discuss dividing your applications into less complex parts, the mapping of your business domains, the categorization of libraries, and access restrictions for enforcing your intended frontend architecture. You will also see how you can drastically improve the performance of your CI process with incremental builds and tests as well as local and distributed build caches. The examples use Angular and Nx, a tool that comes from former Google employees and supports the development of structured enterprise applications with frameworks like Angular or React. In the end, you know whether Frontend Moduliths are the right approach for you and how you can use them to build sustainable frontends for your enterprise solutions.
These days, many teams favor loose coupling, isolation and autonomy of services and therefore typically opt for event-driven and reactive architectures, using a communication pattern known as choreography. While choreography is beneficial in some situations, it is far from the holy grail of integration. In some scenarios, it increases coupling, often accidentally and to a dangerous degree. Orchestration is a better choice for some situations, but is often bashed for introducing tight coupling. I will debunk some of these myths and show how orchestration can even reduce coupling in some situations and totally work in an asynchronous, message-driven fashion. TLDR: Choreography vs. orchestration is NOT about choosing THE right approach. In real life, you need to balance both, so it is about choosing wisely on a case-by-case basis. In order to help you with that, I will walk you through the differences and give you some concrete guidance on decision criteria, backed by examples collected in various real-life projects.
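As a taste of that argument, here is a minimal self-contained sketch of an orchestrator that works purely via asynchronous messages; in-memory queues stand in for broker queues, and all names are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OrderOrchestrator {

    // Stand-ins for broker queues (e.g. RabbitMQ or Kafka in a real system).
    static final BlockingQueue<String> paymentCommands = new LinkedBlockingQueue<>();
    static final BlockingQueue<String> shipmentCommands = new LinkedBlockingQueue<>();
    static final BlockingQueue<String> replies = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        // Worker standing in for the payment service: it consumes a command
        // and eventually publishes a reply, fully decoupled in time.
        Thread paymentService = new Thread(() -> {
            try {
                String command = paymentCommands.take();
                replies.put("PaymentReceived:" + command);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        paymentService.start();

        // The orchestrator only sends commands and reacts to reply messages;
        // the sequence of the flow lives in one place, yet no participant
        // calls another synchronously.
        paymentCommands.put("order-42");
        String reply = replies.take();
        if (reply.startsWith("PaymentReceived")) {
            shipmentCommands.put("order-42");   // next step of the flow
        }
        System.out.println("Next command sent after: " + reply);
    }
}
```

The coordinator knows the process; the domain services only know their own queues, which is precisely why message-driven orchestration need not mean tight coupling.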
Event-driven architecture is the pattern du jour in the microservices world. But there’s more to event-driven than just asynchronous communication. Let’s talk semantics – what does “event” actually mean? Spoiler: not everybody who uses the term means the same thing. It’s all too easy to get confused when people talk about Event Sourcing, Event Streaming, Event-Carried State Transfer, Notification Events, Domain Events, Fat Events, Event Storming and possibly yet other types of events. And above all – why should you even bother with an event-driven architecture; what are the benefits? Time for a proper clean-up. Let’s start with a clear and bounded definition of events, and from there explore the patterns of using events in micro- and macro-architecture, their benefits as well as their challenges. After the talk, participants will know what questions to ask if someone suggests going event-driven, and will be able to assess the applicability of different approaches to their architectural tasks.
A clean architecture is relatively easy for greenfield projects. However, we usually work on legacy systems, and an architecture must adapt in an evolutionary manner – otherwise it will also become legacy very quickly.
This talk will show different approaches to improving legacy systems with Domain-Driven Design. It will focus on different techniques for introducing bounded contexts and assessing where improvements are needed. In this way, Domain-Driven Design becomes possible where it is needed most – in existing systems that are often very successful and business-critical, but were originally developed with no regard to DDD.
Compared to some of the IT industry’s more imaginative job descriptions, “software architect” appears to be a clear-cut role with a generally accepted set of duties and responsibilities. Upon closer inspection, however, this seems to be one of those things that everyone agrees on until one starts looking for a common consensus on the delineation of the role, tasks, and responsibilities of software architects.
On the one hand, the day-to-day work of software architects is often characterized by having to fill other roles as well, such as project manager, requirements engineer, or lead programmer. On the other hand, there are organizations that actually strive for a clear differentiation between persons responsible for different architectural levels or domains. Lack of clarity about the role and responsibilities of software architects not only leads to risks in projects and reduced job satisfaction, it also implies different expectations about the training and skill set required of architects. This presentation will explore and analyze the various perceptions of the role of software architects, based on current literature as well as feedback from practitioners and training participants. Its objective is to make an informed contribution to the ongoing debate on relevant issues, such as: What is the actual core set of tasks and responsibilities? What are typical deviations from this, and what are the reasons behind them? Which consequences does this have for the work of architects, their integration into the organizational context and their training?
Autonomous teams are something we often strive for in software projects. Moreover, autonomy itself is often considered a value without defining what it actually is. The talk will look at the question of team autonomy from the perspective of organisations. Can there be autonomous teams? What does autonomy mean within an organisation? Why does it happen that teams are considered as non-autonomous?
And why is the absence of autonomy still valuable? What is the connection between decisions and autonomy? And why does more autonomy inevitably lead to higher communication costs?
Research shows that, on average, developers spend about 58 percent of their time on reading code! However, we are not explicitly taught to read code in school or in boot camps, and we rarely practice code reading either. Maybe you have never thought about it, but reading code can be confusing in many ways. Code in which you do not understand the variable names causes a different type of confusion than code that is tightly coupled to other code. In this talk, Felienne Hermans, associate professor at Leiden University, will first dive into the cognitive processes that play a role when reading code. She will then show you theories for reading code, and close the talk with some hands-on techniques that can be used to read any piece of code with more ease and fewer headaches!
Not taking into account what a piece of software represents in real life can lead to higher complexity, additional development costs, difficult refactorings and, ultimately, software that no longer scales.
If it gets far enough out of hand, at some point, new features cannot be implemented if they’re not compatible with the current architecture, because they would be too expensive to implement.
In this session, we’ll look at some real-life examples and at some of the things to keep in mind in order to avoid the above-mentioned issues.
In various communities, several methods for the collaborative modeling of business requirements have been established in recent years. Well-known examples are EventStorming or Domain Storytelling. These approaches are based on achieving a better shared understanding of the business requirements in an interdisciplinary way. But what about the requirements for the quality of the software being developed?
This is where Quality Storming comes in, trying to bring together a heterogeneous set of stakeholders of a product or project to collect quality requirements. The goal is to gain a shared understanding of the real needs for the quality characteristics of a product. To achieve this goal, Quality Storming uses some techniques from various already existing collaborative modelling approaches.
Quality Storming does not claim to produce perfectly formulated quality scenarios. Instead, the method aims to create a well-founded, prioritized basis for later formalization that is understood across different stakeholder groups. The more often teams work with the technique, the better the quality of this basis becomes over time. Advanced teams are quite capable of creating very well-formulated scenarios within such a workshop.
In this talk I will introduce the workshop format and the ideas behind it. You will also get many hints for facilitating such workshops and for working with the insights generated in Quality Storming workshops.
Large financial institutions are at a crossroads: since the nineties, the industry has invested massively in first-class reliability and resilience, but modern tech is applying pressure, pushing innovation at breakneck speed, creating a fluid transaction ecosystem, and churning out ever-evolving sets of products and tech talent. In this talk, we will cover our experience liberating the data in the mainframe with a hybrid serverless/containerized solution built on AWS. It’s an architecture with tens of Java serverless functions that live-stream mainframe data to the cloud, using a suite of services capable of operating at hundreds of requests per second: DynamoDB, Lambda, API Gateway, Fargate. We will do a deep dive into various integration patterns and the AWS services that can be used to support them. We will dedicate a special section to the performance optimizations done to reduce cold start problems and support aggressive performance targets. In the last section, we’ll use Lake Formation to create a data platform that can enable a host of new value-add activities for the entire company.
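For a flavor of what such a function looks like, here is a minimal sketch of a Java Lambda handler persisting a streamed record to DynamoDB; the event shape, table and attribute names are hypothetical, and creating the SDK client once per execution environment (outside the handler) is one of the common cold-start mitigations:

```java
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;

public class MainframeRecordHandler implements RequestHandler<Map<String, String>, String> {

    // Created once per container, not per invocation: reusing the client
    // across warm invocations shaves work off every cold start.
    private static final DynamoDbClient DYNAMO = DynamoDbClient.create();

    @Override
    public String handleRequest(Map<String, String> record, Context context) {
        DYNAMO.putItem(req -> req
                .tableName("mainframe-transactions")   // hypothetical table name
                .item(Map.of(
                        "accountId", AttributeValue.builder().s(record.get("accountId")).build(),
                        "payload", AttributeValue.builder().s(record.get("payload")).build())));
        return "OK";
    }
}
```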
As software developers, we spend most of our time maintaining existing systems – under time and budget pressure. Building new business functionality tends to get more difficult, expensive and risky over time due to increasing size, growing complexity and lack of overview. Although we complain about technical debt, lack of innovation and the architectural deficits of historically _grown_ software, we often patch, fix or hack symptoms instead of curing the root causes of these problems. In this talk you’ll get an overview of how to systematically improve or modernize your system. The approach shown here is based upon the established idea of identifying the specific problems first, before changing or modifying a system. We will take a closer look at different areas of investigation, such as architecture, code, technology, quality requirements, application data plus development and rollout processes, in an iterative breadth-first analysis. For each area of investigation I give examples and show methodical tools for effective and practical use. Afterwards you’ll get an overview of strategic and tactical approaches to specific improvements, based upon the problems and risks found during analysis. The presentation is aimed at software development teams, architects, product owners and technical management. Everything I present in this talk has been proven in software and system projects and reviews I conducted over the last couple of years in various industries – so expect some (anonymized) practical examples!
Microservices, and especially the event-driven variants, are at the very peak of the hype cycle and, according to some, on their way down. Meanwhile, a large number of success stories and failures have been shared about this architectural style. In this session, Allard elaborates on how to achieve the benefits of Event-Driven Microservices by not focusing on the Event-Driven aspect and avoiding Microservices, to begin with. He will discuss how a different way of looking at Messaging allows a system to gradually evolve, maybe with microservices as an end result. And maybe, after all, there is something about events that drives these services…
In this presentation, I will talk about my experiences, successes and failures with the arc42 architecture template in a DevOps team in a corporate environment with a product development focus.
Product development is often characterized by short iteration cycles and is therefore often run in an agile manner, as in the speaker’s team. There, the existing unstructured documentation was transferred to the arc42 template and stored in a wiki. Over time, it turned out that tooling plays a decisive role in the quality of the documentation, so the team switched to Docs-as-Code.
In the course of the presentation, the most important decision points for the current iteration of the technical software architecture documentation will be discussed. These include the handling of “developer prose”, outdated documentation and the architecture decisions that are particularly important for a DevOps team. Integration into the team’s Kanban process was made possible with arc42 and a microsite based on AsciiDoc.
Not left out are the mistakes made, such as missing quality assurance of created documentation or the mixing of business and technical topics.
Software architecture emerged in the 1990s, and has been evolving ever since, from a directive, up-front activity, where a single architect created the architecture, which was then implemented by others, to today’s team based adaptive architectural approaches where architecture is a shared activity owned by the entire team. In this talk we’ll explore the architectural practices that deliver architecture as a “shared commons” which supports the Agile+DevOps ways-of-working needed for success in the digital age.
The clean code principles are well-known in modern, agile software development. But what has become the default for our business code unfortunately by no means applies to our infrastructure code. Instead, we find badly crafted, complicated and highly tangled code that is manually tested using a trial-and-error approach. However, for modern cloud-based systems the infrastructure code plays a crucial role, so it’s about time we begin to treat it as a 1st-class citizen! In this hands-on session we show several useful patterns, practices, tools and frameworks that help to write and craft clean infrastructure as code.
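As a small taste, here is a sketch of infrastructure written as ordinary, testable Java using AWS CDK (one of several tools enabling this style); the stack and resource names are illustrative:

```java
import software.amazon.awscdk.App;
import software.amazon.awscdk.Stack;
import software.amazon.awscdk.services.s3.Bucket;
import software.amazon.awscdk.services.s3.BucketEncryption;

public class StorageApp {

    public static void main(String[] args) {
        App app = new App();
        Stack stack = new Stack(app, "storage-stack");

        // Because this is plain Java, defaults are explicit and reviewable,
        // and the construct can be unit-tested like any other class.
        Bucket.Builder.create(stack, "artifacts")
                .versioned(true)
                .encryption(BucketEncryption.S3_MANAGED)
                .build();

        app.synth();   // emits CloudFormation, which the delivery pipeline applies
    }
}
```

The same clean-code disciplines – naming, small units, tests, reviews – then apply to infrastructure exactly as they do to business code.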
We live in the world of Data Science. Ample amounts of data are available which, when utilized at the right time and in the right manner, can help forecast as well as predict in advance untimely failures/disasters that can cause serious and fatal losses. One of the many areas where such predictions and Predictive Analytics Software can be of great use is manufacturing/process industries like power plants, oil and natural gas, and more.
Predictive Analytics Software not only has the components of mainstream software-intensive systems; it also has statistical algorithms, tools, techniques and mathematical components for pattern recognition, as well as other techniques like machine learning, artificial intelligence, modelling etc. It also has diverse stakeholders and other connected systems.
Software Architecture is well practiced in mainstream software-intensive systems like web, embedded and enterprise applications. However, since Predictive Analytics is an emerging branch, there is a lot of scope for research and enhancement of Software Architecture concepts with respect to such software systems.
Drawing on my experience working with Predictive Analytics Software for power plants, in this talk I will cover the following 3 points:
1. Benefits of practicing Software Architecture in Predictive Analytics Software
2. Challenges in using Software Architecture in Predictive Analytics Software
3. Future ahead
Many software-developing organisations adopt DDD and apply strategic design to map out bounded contexts based on domain understanding to build services and applications within those contexts.
Teams have come to appreciate hexagonal architecture as a great approach to isolating the domain within a microservice or an application.
But that cannot be the end of the story – successful applications grow, people learn and the world changes. Bounded contexts will require adjustment, be split or abandoned – and that requires modularity within their domain cores.
I want to show an example of how hexagonal architecture and domain-driven modules go together and how such an architecture can be visualized and organized.
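A minimal sketch of the kind of structure I mean, using a hypothetical “billing” bounded context (all names are illustrative):

```java
// --- domain core module: com/example/billing/domain/Invoice.java ---
// No JPA or REST annotations here; the core stays technology-free.
package com.example.billing.domain;

public record Invoice(String id, long amountCents) {}

// --- port owned by the domain: com/example/billing/domain/port/InvoiceRepository.java ---
package com.example.billing.domain.port;

import com.example.billing.domain.Invoice;

// Adapters implement this interface, so all dependencies point inward and the
// domain module can be split or moved when the bounded context is adjusted.
public interface InvoiceRepository {
    Invoice find(String id);
    void store(Invoice invoice);
}

// --- driven adapter module: com/example/billing/adapter/persistence/InMemoryInvoiceRepository.java ---
package com.example.billing.adapter.persistence;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.example.billing.domain.Invoice;
import com.example.billing.domain.port.InvoiceRepository;

public class InMemoryInvoiceRepository implements InvoiceRepository {

    private final Map<String, Invoice> store = new ConcurrentHashMap<>();

    @Override public Invoice find(String id) { return store.get(id); }
    @Override public void store(Invoice invoice) { store.put(invoice.id(), invoice); }
}
```

Because the domain core owns its ports and carries no technology, it can later be re-cut along new context boundaries without dragging adapters with it.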
Most modern software teams strive for Continuous Delivery of business impact with a DevOps mindset: you build it, you run it. With short iterations and continuous feedback loops, teams deploy new software to production daily.
But how about the role of a software architect in such a fast-paced world? With daily deployments, is there even time for software architecture? As an architect, how do you prevent being a delaying factor to the pace and success of a team? And how do you keep up?
In this session, I’ll share my experiences as a software architect in the DevOps world. I’ll talk about “just enough” architecture and moving from up front design to evolving architecture.
After this session, you’ll have practical insights and tips in how to work as an architect with a DevOps team.
As a growing number of industries turn their focus to climate change, innovating in order to do their part on the journey to Net Zero – how does software engineering fit into this picture, with the industry handcuffed to its consumption of resources? In this talk we will dive into the various resources required to develop and host modern software, as well as the ways in which we can reduce our impact on the environment through architectural choices.
Leading indicators are metrics that give us hints about product quality during the development cycle. The question is: can we make them more accurate? We try to answer this question with Orthogonal Defect Classification (ODC), an important technique that can provide insights into weak process areas of SW development and design areas requiring attention. The talk will focus on how ODC parameters can help improve leading metrics of SW quality.
The package structure you choose has a great influence on the architecture and maintainability of your software system. It lays the foundation for whether your application remains manageable in the long term or becomes a big ball of mud. In this talk, we will show what matters.
The package structure is the basic structure of object-oriented software systems. It is not only a way of grouping classes, but also relevant to every developer in the course of their daily work. Package structures help to quickly grasp and understand structures within the application. Is it possible to derive the functionality from the package name and to talk about the system on a functional level? A meaningful structuring of the application helps in daily work, in the implementation of new requirements and in maintenance, because it enables a higher implementation speed. In many projects, the package structure is based on the stereotypes of classes such as controllers, services or factories. This technical structuring is an intuitive procedure in smaller software systems, but it leads to considerable disadvantages in larger software systems, such as an increase in technical debt. The reason is the resulting lack of system understanding, which leads to unclear responsibilities, undesired dependencies, cycles and high complexity. Finally, this causes applications to erode unnoticed, resulting in a reduction of productivity.
An alternative way of system decomposition helps to avoid the listed negative effects. Focusing on the mental model of the user and the developer, leads to a functional system decomposition. This will be discussed looking at use cases, which everyone can understand, and which illustrate real business transactions.
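To make the contrast concrete, here is a small illustrative sketch of the two decompositions; all package and class names are hypothetical:

```java
// Technical packaging groups classes by stereotype:
//
//   com.example.shop.controller  -> OrderController, CustomerController
//   com.example.shop.service     -> OrderService, CustomerService
//   com.example.shop.repository  -> OrderRepository, CustomerRepository
//
// Functional packaging groups them by business capability, so the package
// name already answers "what does the system do?":
//
//   com.example.shop.order       -> OrderController, OrderService, OrderRepository
//   com.example.shop.customer    -> CustomerController, CustomerService, CustomerRepository

// One class as it would appear in the functional layout:
package com.example.shop.order;

public class OrderService {
    // All collaborators of the "order" capability live next door, which keeps
    // dependencies local, makes cycles visible and supports talking about the
    // system on the functional level.
}
```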
Team diversity refers to differences between members of a startup team. Those differences can include demographic differences (like age, race, sex, ethnicity), personality (extrovert, introvert, and differing Myers-Briggs types) and function (as in skill sets, like engineering, design, copywriting, and marketing).
How does team diversity impact your customers’ experience from the moment they learn about you through their journey with you?
You will attract and relate to customers who look like you. They will understand your messaging and you will understand their needs. If you don’t represent the right dimensions of diversity, you are leaving an amazing experience behind.
What if what we think makes a great leader is all wrong? When you ask people to think of leadership qualities, they tend to choose words like bold, powerful, and fearless. But if you ask people to name traits of the leaders they would like to emulate and/or follow, you get a different list. In fact, in my research, bold, powerful, and fearless don’t appear anywhere near the top of the list. So you have to ask yourself: are you trying to adopt the traits you think will make you a great leader, or do you want to cultivate the qualities that people are looking for in a leader they want to follow? Based on my last couple of years of research across many demographics, I’ve compiled a list of qualities people look for in leaders. My message is that because you already possess the traits to be a great leader, you can unleash the power to do great things. The past year and a half has presented the world with many challenges that I think we can overcome with strong, genuine leadership. Revolutions and significant changes always start with an individual. We all have a role to play, and becoming a genuine leader is the start.
Embedded real-time systems, especially those with specialized hardware, pose many additional architectural challenges compared to commercial software architectures: technology trade-offs between hardware and software, qualities like availability that can only be addressed by hardware and software in conjunction, hard real-time requirements, and more. Using an industrial system (called Traffic Pursuit System, mounted in police cars to trace traffic offenders) as an example, this talk demonstrates hardware/software co-design and its documentation in the proven arc42 template. We will demonstrate how the template can be used to capture both hardware and software design (and their alignment), how hardware and software interfaces can be modeled, and how system design decisions can be captured.
Special emphasis will be put on demonstrating architectural decisions to fulfil specific quality requirements (like accuracy of the measurement, robustness of the overall system and ease of use for police officers).
As the worlds of RESTful APIs and asynchronous events converge, it is clear that organizations struggle with understanding and designing highly-integrated, highly-distributed systems. This session will teach attendees a new, visual approach to integration design and analysis that includes synchronous APIs, asynchronous events, and other integration methods such as batch and streaming.
Many companies focus on technological questions when transitioning from traditional IT infrastructure to cloud computing. Yet, not all practices, processes, and policies fit cloud-related concepts like DevOps, DevSecOps, PaaS and serverless, zero-trust networking, etc. In this session, Rainer Stropek shares his views on organizational aspects that are crucial for larger organizations that want to benefit from cloud-native software development in hyper-scale public cloud environments.
Anti-Patterns are like patterns, only more informative. With anti-patterns you will first see what patterns reoccur in “bad” retrospectives and then you will see how to avoid, or remedy, the situation. Based on her experience with facilitating retrospectives, join Aino for an entertaining and informative presentation on the anti-patterns she has seen and how to overcome the problems. This talk is focused on retrospectives, but will be interesting for everyone facilitating any kind of meeting.
“Big design up front is dumb. Doing no design up front is even dumber.” This quote epitomises what I’ve seen during our journey from “big design up front” in the 20th century, to “emergent design” and “evolutionary architecture” in the 21st. In their desire to become “agile”, many teams seem to have abandoned architectural thinking, up front design, documentation, diagramming, and modelling. In many cases this is a knee-jerk reaction to the heavy bloated processes of times past, and in others it’s a misinterpretation and misapplication of the agile manifesto. As a result, many of the software design activities I witness these days are very high-level and superficial in nature. The resulting output, typically an ad hoc sketch on a whiteboard, is usually ambiguous and open to interpretation, leading to a situation where the underlying solution can’t be communicated, assessed, or reviewed. If you’re willing to consider that up front design is about creating a sufficient starting point, rather than creating a perfect end-state, you soon realise that a large amount of the costly rework and “refactoring” seen on many software development teams can be avoided. Join me for a discussion about the lost art of software design, and how we can reintroduce it to help teams scale and move faster.