Appian Rocks

By: Stefan Helzle

About this audio content

Where No Code Has Gone Before © 2022 Stefan Helzle
    Episodes
    • Digital Process Ethics
      Jan 21 2026
      In this episode of Appian Rocks, Stefan, Sandro, and Marcel dive into a conversation that starts with the seemingly dull topic of software reviews but quickly evolves into a deep, thought-provoking discussion about ethics in digital process automation. Initially, they touch on the typical components of a code review: adherence to best practices, syntax, node counts in processes, and test cases. However, they challenge the narrow scope of this approach, questioning whether technical correctness alone is sufficient, especially when the software influences real-world decisions in complex environments.

      The conversation shifts to the broader context in which applications operate, especially in public sector projects. The team notes that stakeholders such as the funding agency, the users, and the beneficiaries are often different entities, each with distinct priorities. This creates a tension in which developers can find themselves caught in the middle: while developers are typically not policymakers, the code they write can enforce rules and decisions that significantly affect people's lives. This leads to a central theme of the episode: software is not neutral. It embodies decisions, and those decisions can have ethical consequences.

      They explore how public sector automation transforms discretionary, human-driven processes into rigid, rule-based systems. This transition, while increasing efficiency, risks stripping away the nuance and empathy that experienced civil servants once applied. For example, decisions about child support or eligibility for government aid, previously made by humans considering context and individual circumstances, are now reduced to logic gates and business rules. The trio argues that this change demands new layers of oversight: beyond testing whether a process works, teams must ask whether it works *fairly* and *justly*.

      A particularly striking point is the lack of ethical audits in most software development projects. Stefan admits he has never performed one, and the group collectively questions why such audits aren't standard practice. Is it because they were never needed? Or is it because ethical responsibility was previously embedded in human roles rather than in the tools themselves? They agree that developers, especially solution designers and business analysts, have a duty to consider the broader impacts of their implementations.

      The discussion also touches on traceability and transparency. Marcel introduces traceability as a critical requirement, particularly in government software: every feature in an application should be traceable back to a signed-off requirement to ensure accountability. This is essential not only for auditing but also for safeguarding citizens' rights when decisions are automated. Transparency is highlighted as a core value as well; systems should give users understandable explanations for decisions, such as why a child support claim was denied.

      As the episode closes, the hosts underline the need for ethical codes within development teams. Guidelines alone aren't enough; teams must establish practical escalation paths and support for developers who encounter ethical red flags. Developers should feel empowered to say no to unethical requests and to escalate questionable requirements. Ethical responsibility, they stress, belongs to everyone involved, not just legal or compliance departments.

      Ultimately, this episode calls for a shift in mindset. In an era where software often replaces human discretion, ethics must become a first-class concern in digital process design. Developers, architects, and analysts need to see themselves not just as implementers of logic, but as stewards of values that impact real lives.
      1 hr 2 min
    • Dealing with External Data Models
      Nov 6 2025
      In the latest Appian Rocks episode, hosts Stefan, Sandro, and Marcel discussed managing external data models in Appian, focusing on Data Transfer Objects (DTOs) for abstracting and transferring data between incompatible systems. Marcel, a solution architect, highlighted the challenge of integrating external data, whether from microservices or legacy systems, and questioned the wisdom of forcing a single business object model across an enterprise.

      The conversation explored communication methods and the common scenario of Appian performing internal data transformations. Stefan emphasized that Appian often needs only a subset of external data. Marcel explained that a central translation layer for DTOs can consolidate logic, preventing widespread changes if a DTO evolves. They also discussed API composition and anti-corruption layers (ACLs), which let systems communicate using their own data models, with translation in the middle. Marcel likened DTOs to "DHL packages" for data, while ACLs help reduce the amount of transferred information, adhering to the need-to-know principle.

      Stefan pointed out the fundamental difference between process-driven Appian systems and data-storing backends. Marcel added that highly normalized external data might require denormalization for Appian UI performance. They also covered various forms of coupling: data format, interaction style, semantics, order of operations, network location, temporal coupling, and network topology. Stefan shared an anecdote about time zone issues causing data discrepancies.

      Sandro presented a "war story" about enriching read-only external customer data. Stefan immediately suggested Appian's sync records as a solution for creating cached local copies and improving query speed. Marcel agreed, comparing it to a materialized view. When Sandro revealed that API-based integrations across multiple unreliable source systems led to instability, Marcel proposed an API Composer service with caching and retry mechanisms. Stefan countered that Appian's synced records can now handle unsuccessful or partial syncs.

      They concluded that data duplication is a pragmatic approach, especially for low-priority reference data or when sensitive data shouldn't reside directly in Appian. While reliable software is costly, local data duplication can be a cost-effective solution for individual applications; the crucial factor is ensuring awareness of changes so the cached data stays current. Marcel, despite his skepticism, acknowledged that synced records solve common problems in an approachable way, aligning with Appian's platform philosophy.
      46 min
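The DTO and anti-corruption-layer idea discussed in this episode can be sketched generically. This is not Appian code; the external payload shape, field names, and the `CustomerDTO` type are all invented for illustration. The point is the pattern: one translation function owns the mapping, and the internal DTO carries only the fields the application actually needs.

```python
from dataclasses import dataclass

# Lean internal DTO: only the fields this application needs
# ("need-to-know" principle), decoupled from the external model.
@dataclass(frozen=True)
class CustomerDTO:
    customer_id: str
    display_name: str
    is_active: bool

def to_customer_dto(external: dict) -> CustomerDTO:
    """Anti-corruption layer: translate the external system's verbose,
    differently-named record into the internal DTO. All mapping logic
    lives here, so a change in the external model touches one place."""
    return CustomerDTO(
        customer_id=str(external["CUST_NO"]),
        display_name=f'{external["FIRST_NAME"]} {external["LAST_NAME"]}'.strip(),
        is_active=external.get("STATUS") == "A",
    )

# A payload as a legacy backend might send it (invented shape):
raw = {
    "CUST_NO": 4711,
    "FIRST_NAME": "Ada",
    "LAST_NAME": "Lovelace",
    "STATUS": "A",
    "INTERNAL_RISK_SCORE": 0.42,  # sensitive field the ACL deliberately drops
}
dto = to_customer_dto(raw)
```

Because the DTO deliberately omits fields like the risk score, sensitive backend data never crosses into the consuming application, which echoes the episode's point about reducing transferred information.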
    • AI Contextualized
      Aug 27 2025
      In this episode of Appian Rocks, Stefan, Sandro, and Marcel tackle the controversial role of artificial intelligence in process implementation projects. While acknowledging AI's impressive capabilities, they warn against the industry's tendency to treat it as a universal solution. What demos well in sales meetings often falls short in practice, producing answers that only sound competent. The hosts argue that uncritical adoption leads to laziness, outsourcing of judgment, and a dangerous decline in deep problem-solving skills.

      Marcel frames the issue as the "hammer and nail" problem: with AI marketed as the hammer, everything starts looking like a nail. This obsession can stifle thoughtful analysis and push teams to skip the hard work of understanding processes. Stefan illustrates this with a client case where rethinking and simplifying steps, without AI, halved the workload. The real benefit came not from automation but from owning the thinking and redesign. If a team relies on a chatbot instead, it risks losing both control and learning.

      Still, the hosts emphasize that AI has valuable use cases, particularly where input is noisy or unstructured. Summarizing long documents, extracting fields from messy scans, or parsing communication are areas where probabilistic language models excel. But when data is already structured and clear, adding AI can actually reduce quality. As Stefan puts it, "the best part is no part": if a step adds no value, eliminate it rather than overengineering with AI.

      The conversation then broadens to the societal and environmental costs of AI overuse. Marcel highlights the immense energy and water consumption of data centers, noting that a single AI query is vastly more resource-hungry than a standard Google search. Sandro compares the phenomenon to refrigerators: once they became widespread, people stopped considering older preservation methods and even began misusing fridges for foods that spoil faster inside them. Likewise, if developers only learn to solve problems through AI, they may never develop alternative methods, filling the industry with people who know no tools beyond the "fridge."

      The panel also warns about economic risks. Current AI feels cheap because of heavy investment subsidies, but providers will eventually move to value-based pricing, charging for "man-hours saved." This could trap organizations in costly dependencies once AI is deeply integrated into core processes. Consultants, they argue, must therefore frame adoption not only around use-case justification but also total cost of ownership, including volatile token-based pricing.

      In closing, the hosts underline that AI should be one tool among many. Its convenience is undeniable, but convenience alone is no justification. In low-code environments like Appian, the temptation to lean on AI for speed is strong, yet true transformation still requires creativity, critical analysis, and ownership of solutions. Overuse risks fragile systems and a loss of craft. For now, they agree: AI is powerful and promising, but it must be applied sparingly, thoughtfully, and only where it adds real value.
      54 min