"Deutero-learning" (Learning II) and GitHub Flow!

This post connects Bateson's "Deutero-learning" (Learning II) to the practicalities of GitHub Flow.

Bateson's Learning II is about learning to learn – changing the context or frame in which you learn, rather than just learning specific responses within a fixed frame. It's about changing your habits of expectation, recognizing the patterns of your own learning, and adapting your approach to problems. In a team context, it's about shifting the collective "rules of the game" or the assumptions about how you learn and operate.

Learning I: (Simple change within a frame) "Oh, I made a mistake on line 27; I'll fix that."

Learning II: (Changing the frame) "We keep making the same type of mistake (e.g., security vulnerabilities, performance bottlenecks) in every module. Our process for testing/designing must be flawed, or our understanding of security/performance needs a fundamental shift."

Within a GitHub context, Learning II isn't about a single pull request, but about evolving the team's shared mental models, conventions, and automated processes based on patterns observed in the flow of work.

Here's how to understand Learning II in a GitHub Flow context, broken down into a "micropractice" at this higher, meta-level:


Learning II (Deutero-Learning) in GitHub Flow: The Meta-Micropractice

This isn't a coding micropractice, but a team process micropractice that leverages the transparency and data provided by GitHub.

Goal: To identify and adapt the underlying assumptions, rules, and patterns of the team's development process, tools, or architectural principles, based on recurring issues or opportunities.

Phase 1: Pattern Recognition & Anomaly Detection (The "What" is the Pattern?)

  1. Systematic Observation of Recurring Failures/Frictions in GitHub Flow:

    • What: The team actively monitors patterns in their GitHub activity, looking for:

      • Recurring Bug Types/Categories: (e.g., "We consistently have PRs breaking database migrations," "We always have off-by-one errors in data processing logic.") – often identified via GitHub Issues labels and trends.

      • Frequent Code Review Cycles/Bottlenecks: (e.g., "Every PR that touches our authentication module takes forever to review," "We always argue about naming conventions.") – observable in long-lived PRs, excessive comments.

      • Repeated CI/CD Failures of a Certain Type: (e.g., "The integration tests for X service frequently fail due to environment setup issues.") – seen in GitHub Actions logs/PR checks.

      • Persistent Architectural Violations: (e.g., "Despite our Clean Architecture intentions, new developers keep introducing dependencies from the UI layer directly to the database.") – observed in code reviews, static analysis reports linked in PRs.

      • Inefficient Practices: (e.g., "We always forget to update documentation for new features," "Our release process is manual and error-prone.")

    • Why GitHub Context: GitHub provides the data for this observation: Issues, Pull Request history, CI/CD logs, commit messages, code diffs, discussions. These are the "signals" that a deeper pattern might exist.

    • Micropractice Element: "The Weekly Pattern Retrospective": A dedicated time (e.g., 15-30 mins) in the weekly sync to collectively ask: "What patterns of friction or recurring issues did we observe in our GitHub flow this week/sprint?"

  2. Hypothesis Formulation (The "Why" is this Pattern Happening?):

    • What: Once a pattern is observed, the team collaboratively brainstorms potential underlying causes or faulty assumptions. This moves beyond individual blame to systemic thinking.

      • Example 1: "Our auth PRs are slow to review because there's no clear pattern for how new auth features should integrate."

      • Example 2: "Developers keep violating architecture because our linters don't catch it, and the architecture isn't clearly documented/enforced."

    • Why Bateson LII: This is the critical "meta" step – identifying the frame (the current understanding, process, or architectural pattern) that is leading to the repeated "errors" (Learning I failures).

    • Micropractice Element: "Root Cause Pattern Analysis": During the retrospective, dedicate time to drill down on one identified pattern. "What assumptions are we making that lead to this? What process or tool is inadequate?"
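To make the pattern-recognition step concrete, here is a minimal Python sketch of how a team might surface recurring bug categories from issue data. It assumes the issues have already been fetched (e.g., via the GitHub REST API) into a list of dicts shaped like the API's issue objects; the label names and threshold are hypothetical.

```python
from collections import Counter

def recurring_bug_patterns(issues, min_count=3):
    """Count label frequencies across bug issues to surface recurring
    categories worth a Learning II conversation.

    `issues` is a list of dicts shaped like GitHub REST API issue
    objects: each has a 'labels' list of {'name': ...} dicts.
    """
    counts = Counter(
        label["name"]
        for issue in issues
        for label in issue.get("labels", [])
    )
    # Keep only labels that recur often enough to suggest a systemic
    # cause rather than a one-off mistake.
    return {name: n for name, n in counts.items() if n >= min_count}

# Hypothetical sprint data: three migration-related bugs is a signal
# that the *process* around migrations, not any one PR, is the problem.
issues = [
    {"labels": [{"name": "bug"}, {"name": "db-migration"}]},
    {"labels": [{"name": "db-migration"}]},
    {"labels": [{"name": "db-migration"}]},
    {"labels": [{"name": "bug"}]},
]
print(recurring_bug_patterns(issues))  # → {'db-migration': 3}
```

The output of a script like this could be pasted into the Weekly Pattern Retrospective as the starting evidence for a hypothesis.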

Phase 2: Frame Re-Evaluation & Experimentation (The "How" do We Change the Frame?)

  1. Proposing a "Meta-Change" (A New Rule/Tool/Convention):

    • What: Based on the hypothesis, the team proposes a change to their process, tooling, or architectural convention that addresses the underlying pattern. This isn't a code change, but a change to how code is developed or reviewed.

      • Example 1: "Let's establish a formal 'Authentication Feature Development' checklist and template for future PRs."

      • Example 2: "We need to implement a new static analysis tool (e.g., dependency-cruiser in JS, custom linter in Python) that explicitly checks for architectural violations in CI/CD."

      • Example 3: "We will commit to making smaller, more frequent commits for all PRs to improve review speed."

    • Why GitHub Context: These proposals often manifest as:

      • New entries in a CONTRIBUTING.md file (committed to Git).

      • Updates to GitHub Actions workflows (YAML files in .github/workflows/, committed to Git).

      • New pre-commit hooks or linter configurations (files committed to Git).

      • A new issue created to track the implementation of the meta-change itself.

    • Micropractice Element: "The 'Meta-PR' Proposal": A team lead or a volunteer creates a PR (sometimes just a document, sometimes actual code for a new tool/config) that formally proposes the meta-change. This PR becomes the discussion forum.

  2. Implementing and Observing the "Meta-Change":

    • What: The proposed meta-change is implemented (e.g., a new linter is configured, a new PR template is created, a new team convention is adopted).

    • Why GitHub Context: The implementation itself is often committed to Git as regular code/config changes. The effect of these changes is then observed in subsequent PRs and development cycles on GitHub.

    • Micropractice Element: "Adopt & Monitor": The team actively tries to adhere to the new convention or utilizes the new tool. During subsequent development cycles, they consciously monitor if the identified pattern of friction has decreased.
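As an illustration of Example 2 above (a custom linter enforcing architectural boundaries), here is a minimal Python sketch of a layer-dependency check that could run in CI. The layer names and rules are hypothetical; it uses only the standard library's `ast` module.

```python
import ast

# Hypothetical Clean Architecture rule for this sketch: modules in the
# UI layer must not import directly from the database layer.
FORBIDDEN = {"ui": {"db", "database"}}

def layer_violations(source, layer):
    """Return the module names that code from `layer` imports in
    violation of the FORBIDDEN rules above."""
    banned = FORBIDDEN.get(layer, set())
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        # Flag any import whose top-level package is a banned layer.
        violations.extend(n for n in names if n.split(".")[0] in banned)
    return violations

code = "from db.models import User\nimport ui.widgets\n"
print(layer_violations(code, "ui"))  # → ['db.models']
```

Wired into a GitHub Actions workflow that fails the build when violations are found, this turns an implicit architectural convention (a frame that kept being violated) into an explicit, enforced rule of the game.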

Phase 3: Integration & Reinforcement (The "How" do We Embed New Learning?)

  1. Reflecting on the Impact of the "Meta-Change":

    • What: After a period of trying the new process/tool (e.g., a sprint or two), the team formally reviews its effectiveness. Did it actually solve the underlying problem? Did it create new problems?

    • Why GitHub Context: This reflection can happen by reviewing GitHub data again: Are auth PRs now faster? Is the number of architectural violations decreasing? Are CI failures of that type less frequent?

    • Micropractice Element: "Impact Review & Adjustment": During the next Weekly Pattern Retrospective, dedicate time to review the impact of the previously implemented meta-change. "Did the new PR template reduce review time for X?"

  2. Reinforcing the New Frame or Iterating:

    • What: If successful, the new process/tool/convention becomes an ingrained part of the team's development "muscle memory" – the new "rule of the game." If unsuccessful, the team iterates on the meta-change or discards it and tries a different hypothesis.

    • Why Bateson LII: This is where the deutero-learning solidifies – the team has genuinely learned to learn differently about this specific type of problem. Their internal model of how to approach this kind of development challenge has shifted.

    • Micropractice Element: "Document & Train": Successful meta-changes are formally documented (e.g., in the project Wiki, CONTRIBUTING.md, or internal team handbook) and new team members are onboarded to these evolved practices. If the change was unsuccessful, the cycle restarts with a new hypothesis.
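The Impact Review step can be backed by a simple before/after comparison of GitHub data. Here is a minimal Python sketch, with hypothetical numbers, comparing median PR review times (e.g., hours from "ready for review" to merge, pulled from the PR timeline) before and after a meta-change:

```python
from statistics import median

def review_time_impact(before_hours, after_hours):
    """Compare median PR review times before and after a meta-change.
    Inputs are lists of per-PR review durations in hours."""
    b, a = median(before_hours), median(after_hours)
    return {
        "before_median": b,
        "after_median": a,
        "change_pct": round((a - b) / b * 100, 1),
    }

# Hypothetical data: auth-module PRs before and after a new PR template.
print(review_time_impact([30, 42, 55, 61], [12, 18, 20, 33]))
# → {'before_median': 48.5, 'after_median': 19.0, 'change_pct': -60.8}
```

A clear drop supports reinforcing the new frame; a flat or worse number sends the team back to a new hypothesis.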


In essence, Learning II within GitHub Flow is about using GitHub's transparency and version history as a data source to diagnose systemic issues in the way the team builds software, and then collaboratively proposing, implementing, and validating meta-level changes to their processes, tools, or shared conventions. It moves beyond fixing individual bugs to fixing the underlying patterns that create them, leading to a continuously improving, more adaptive development team.
