A Skeleton DevOps framework in Python for Epicor Kinetic - How I Manage Scripts and API secrets

Are you asking: “Who isn’t messy?” If so, I don’t know anyone who isn’t, but that doesn’t stop people from being judgmental, dismissive, hostile and far too often cruel to others they deem beneath them.

If that’s not the direction you were headed, I didn’t follow your chain of thought, and I ask that you elaborate so I can attempt to understand.

  • Kevin
1 Like

Sorry that the AI comments derailed the thread. I can say for certain that neither Jose nor Simon thinks you are beneath them. You know the saying, “Only a friend will tell you if you have a booger on your nose?” We know you here. You have a strong voice here, and we’d like to hear it. This AI thing is new and we’re all getting used to it. People who used Google Glass were called Glassholes, but now people are wearing Meta Ray Bans and nobody blinks. (And if they do, nobody can see it! :rofl:) We’re at the Google Glass period of AI. As Gabe said, once we can add our voice to some of the AI prompts, it will again separate you from the bland, neutral output of every other AI response. We’ll get there. We always do.

I’m ALWAYS interested in conversations related to DevOps and security. You have put some serious thought into security by shifting it left early!!! YOU have built something interesting and it has value. Let’s keep this conversation going. You have a lot to offer.

2 Likes

Hello @Kevin_Barrow No insult intended at all, I was empathising :anguished_face: , I’m the same, and was really being judgemental of myself, if anything. I find I fight that battle on a regular basis, if not daily. Apologies if it was misconstrued.

Epiusers, I 100% agree, should be a place of Psychological Safety at all times.

3 Likes

I want to be clear and explicit—I’ve never felt that anyone here looked down on me, nor have I taken offense to any responses. I tend to communicate transparently and, at times, bluntly, which I realize can be misread.

I apologize for any friction I have caused; it certainly wasn’t intentional. I truly appreciate the feedback and the different perspectives—they only make the framework better, even the side quests have value as they can help reframe the narrative or overcome communication barriers.

  • Kevin
8 Likes

Gabe, your point on personalized training is well-taken; it’s the logical next step for my composition workflow. In the meantime, I’ve implemented an architectural mandate (prompt engineering) to bridge that gap immediately.

I call it the Agency-Resolution Protocol. It treats the AI persona as a Non-Newtonian Fluid: it remains fluid for creative tasks but Shear-Thickens into high-density technical rigor the moment it encounters corporate fluff or unrequested prose.

The Agency-Resolution Protocol

This ensures output maintains Technical Momentum over artificial grace. It is already preventing the “Skeleton” of this DevOps framework from being polluted by AI-padding or low-density prose.

1 Like

I think AI has its place in the world, but it does give you the wrong answer (quite often)…

Example: Ask it how many 'r’s are in strawberry.

I’m also waiting for the day it’s all leaked..

Write sloppy code you’ll improve.. like jesus look at my starting out posts.. I basically hounded @hkeric.wci until he would help me.. :smiley:

Finishing up with my input… I’ll leave you with Boris.. Boris Johnson Says He Loves ChatGPT - YouTube

PS. @josecgomez Way to go starting an anti-ai cult and leaving.. next you’ll have @klincecum telling us how he managed to harness the power of the sun in his pocket watch.. :face_exhaling:

2 Likes

Steve Brule GIF by MOODMAN

I do occasionally watch Styropyro on youtube…

I dislike writing documentation, it’s easier to ask AI to do it, then I only have to iterate until I get the results I desire, either through follow up prompts, or direct edits. AI is good at drafting things, but AI output always needs to be reviewed and fact checked. Although most things you read/write ‘should be’ reviewed and fact checked anyways.

  • Kevin

100% - Like I said, it has its place.

1 Like

Mark did a fantastic job of providing a comprehensive list of pain points. One that really stuck with me is how much manual effort it takes for admins to keep track of what code exists in each environment. That’s where drift creeps in — eventually nobody is 100% sure what’s actually deployed where, or what should be deployed.

At its core, this isn’t a tooling problem so much as a state‑management problem.

What I’m trying to do is a sanity check; I want ideas, comments, roadblocks, and general thoughts:

Does explicitly separating intent, expectation, and reality actually line up with how you want to work? And where would this break down in real‑world Kinetic usage?

This isn’t meant as a prescriptive “do this” solution, but as a way to reason about the problem space.


1. Proposed Architecture

Problem framing

  • Environment state in Kinetic is non‑deterministic unless it is actively reconciled.
  • Any deploy‑only model will fail over time because environments inevitably mutate outside pipelines (UI edits, emergency fixes, cloud‑side actions).

Core principle

  • Explicitly separate intent, expectation, and reality.
  • Never allow a single artifact to claim to be “the truth.”

Structural solution

A single repository with clearly separated responsibilities:

  • /projects folder — authoritative deployable units
  • Master fork of /projects — environment‑neutral canonical source
  • Environment branches — declared deployed state pointers
  • Environment manifests — expected state contracts
  • Backup forks — observed reality snapshots
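
As a concrete (and purely hypothetical) illustration, that separation of responsibilities might look like this — the folder, manifest, and branch names below are placeholders, not prescriptions:

```
repo/
├── projects/                  # authoritative deployable units
│   ├── ar-aging-report/       #   (example artifact names)
│   └── po-approval-bpm/
└── manifests/
    ├── dev.manifest.json      # expected-state contract per environment
    ├── training.manifest.json
    └── prod.manifest.json

branches:
  main             # environment-neutral canonical source (master fork)
  env/dev          # declared deployed-state pointer
  env/training
  env/prod
  backup/prod/*    # observed reality snapshots
```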

The Project Folder Master Fork

The /projects folder is not a convenience directory; it is a governance boundary.

It contains:

  • Script bundles
  • Report packages
  • Function artifacts
  • Any deployable Kinetic unit that can be meaningfully versioned

Critically, /projects is:

  • Environment‑agnostic
  • Credential‑free
  • Free of environment assumptions
  • Ignored by kinetic_devops, so local experimentation cannot mutate deployable state

This allows /projects to function as a pure artifact registry, not a working directory.

The master fork (or protected branch) of /projects is the only source from which Training and Production promotions are allowed to occur. No environment may receive anything that does not already exist here.

It defines:

  • What can exist anywhere
  • What may be promoted
  • What is considered reviewable work

Developers do not deploy “their branch” into higher environments. Pipelines deploy approved artifacts. Promotion is a controlled reference move, not a file copy.


Dev as an Integration Environment (Feature Branch CI/CD)

Development environments serve a different purpose than Training or Production: they are integration platforms, not promotion targets.

As such:

  • Dev environments may accept deployments from short‑lived feature branches, constrained to /projects
  • Training and Production environments may not

This enables full CI/CD against a real Kinetic target before changes are merged into main, while keeping main continuously releasable.

Feature branch deployments are explicitly non‑authoritative:

  • They do not advance promotion pointers
  • They do not redefine expected state
  • They exist to validate changes prior to merge

Environment‑Driven Deployment Policy

Deployment eligibility is environment‑owned, not pipeline‑hardcoded.

Each environment declares its own deployment policy via secure metadata (e.g., Keyring), such as:

  • IsProduction
  • AllowedDeploymentSources (FeatureBranch, Main, ReleaseTag)

Pipelines evaluate this contract rather than embedding branch logic. This allows:

  • Dev to accept feature branch deployments
  • Training to behave like Prod when desired
  • Production to require stricter sources (e.g., signed release tags)

The environment declares what it will accept; the pipeline merely enforces it.
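
The contract evaluation itself is small. This is a sketch only: in the real setup the raw policy string would come from secure metadata (e.g. keyring.get_password(...)); the field names IsProduction and AllowedDeploymentSources follow the description above, but the JSON shape is my assumption.

```python
import json

def parse_policy(raw: str) -> dict:
    """Deserialize an environment's stored deployment contract."""
    return json.loads(raw)

def deployment_allowed(policy: dict, source_kind: str) -> bool:
    """The pipeline evaluates the environment's contract; it never hardcodes branch logic."""
    if policy.get("IsProduction") and source_kind == "FeatureBranch":
        return False  # extra guard: production never accepts feature branches
    return source_kind in policy.get("AllowedDeploymentSources", [])

# Dev accepts feature branches; Production requires release tags.
dev_policy = parse_policy('{"IsProduction": false, "AllowedDeploymentSources": ["FeatureBranch", "Main"]}')
prod_policy = parse_policy('{"IsProduction": true, "AllowedDeploymentSources": ["ReleaseTag"]}')
```

Because the policy lives with the environment, Training can be tightened to behave like Prod just by changing its stored contract, with zero pipeline edits.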


Lifecycle

  • Dev work → feature branch under /projects
  • CI → build/test on PR
  • Deploy‑to‑Dev → allowed from feature branches (integration testing)
  • Merge → PR into main after Dev validation
  • Promote → pipeline deploys from protected /projects fork
  • Validate → environment checks
  • Record → environment branch + manifest update
  • Observe → explicit pull/export from environment
  • Reconcile → diff against master fork and manifests

Invariant

  • Humans propose.
  • Pipelines promote.
  • Environments are observed, not trusted.

Intent, Expectation, Reality

This model deliberately distinguishes three independent states:

  • Intent — what we believe should exist, expressed through Git history and review
  • Expectation — what an environment contractually claims to contain, expressed through manifests
  • Reality — what is actually present, obtained only through observation and export

None of these is allowed to overwrite the others silently.
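
The three states can be modeled literally. A minimal sketch (the class and the labels are hypothetical, not part of the framework):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ArtifactState:
    intent: Optional[str]       # what Git history says should exist (reviewed version)
    expectation: Optional[str]  # what the environment's manifest claims
    reality: Optional[str]      # what an export of the live environment observed

def classify(state: ArtifactState) -> str:
    """Name the divergence; never let one state silently overwrite another."""
    if state.intent == state.expectation == state.reality:
        return "in-sync"
    if state.reality != state.expectation:
        return "drift"             # the environment mutated outside the pipeline
    return "pending-promotion"     # intent has moved ahead of the recorded deploy
```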


Environment Branches as Pointers, Not Workspaces

Branches such as env/dev, env/Training, and env/prod do not contain evolving code. They contain references.

They are advanced only by automation, and only after:

  1. Deployment succeeds
  2. Validation passes
  3. The environment is confirmed to match the deployed artifact

They answer a single question:

“What version did we last declare as deployed here?”

They are historical markers, not collaboration surfaces.
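
The advancement rule collapses to a single gate. The function below is an illustrative stand-in for the automation step, not the SDK's API:

```python
from typing import Optional

def advance_pointer(env: str, version: str,
                    deployed: bool, validated: bool, matches_artifact: bool) -> Optional[str]:
    """Return the new pointer value only if every gate passed; otherwise leave it untouched."""
    if deployed and validated and matches_artifact:
        return f"env/{env} -> {version}"   # automation moves the reference
    return None                            # a human never force-pushes this branch
```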


Environment Manifests as Contracts

Manifests sit orthogonally to branches.

They describe:

  • Expected artifacts
  • Expected versions or hashes
  • Optional fingerprints of deployed state

They are designed to be violated.

When reality diverges from a manifest, that divergence is surfaced rather than hidden. Drift becomes observable and discussable instead of anecdotal.
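
To make "designed to be violated" concrete, here is a hypothetical manifest shape and a check that only reports divergence. The field names and artifact names are illustrative, not the actual schema:

```python
import json

MANIFEST = json.loads("""
{
  "environment": "training",
  "artifacts": {
    "ar-aging-report": {"version": "1.4.0"},
    "po-approval-bpm": {"version": "2.1.3"}
  }
}
""")

def violations(manifest: dict, observed: dict) -> list:
    """Surface every divergence between contract and observed state; never auto-fix."""
    expected = manifest["artifacts"]
    out = []
    for name, meta in expected.items():
        got = observed.get(name)
        if got is None:
            out.append(f"missing: {name} (expected {meta['version']})")
        elif got != meta["version"]:
            out.append(f"version-drift: {name} expected {meta['version']}, observed {got}")
    for name in observed.keys() - expected.keys():
        out.append(f"unexpected: {name} (not in contract)")
    return out
```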


Pull, Snapshot, Reconcile (The Non‑Negotiable Half)

The model explicitly supports pulling from environments.

A pull operation:

  • Exports the live environment state
  • Stores it in a backup fork or protected snapshot branch
  • Computes diffs against:
    • The environment branch (declared intent)
    • The environment manifest (expected state)
    • The master /projects fork (approved artifacts)

Nothing is overwritten. Nothing is “fixed” automatically.

Unexpected changes become evidence, not embarrassment.
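
A pull reduces to fingerprints plus a three-way comparison. A sketch, under the assumption that each state can be flattened to a path-to-hash map:

```python
import hashlib
from pathlib import Path

def fingerprint(root: Path) -> dict:
    """Flatten an exported tree to {relative_path: sha256} so states compare cheaply."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def three_way_diff(reality: dict, declared: dict, approved: dict) -> dict:
    """Report evidence only: nothing is overwritten, nothing is 'fixed' automatically."""
    return {
        "undeclared_in_env": sorted(reality.keys() - declared.keys()),   # drift
        "missing_from_env":  sorted(declared.keys() - reality.keys()),   # failed/partial deploy
        "never_approved":    sorted(reality.keys() - approved.keys()),   # bypassed review
    }
```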


Why the Backup Fork Exists

The backup fork is not a rollback mechanism.
It is not a workflow tool.
It is forensics.

It answers:

  • “When did this change first appear?”
  • “Was this ever reviewed?”
  • “Did this exist before promotion?”

In regulated or audit‑sensitive environments, this alone justifies the model.


Resulting Operating Model

  • Developers propose changes to /projects
  • Dev environments support CI/CD from feature branches
  • Pipelines are the only entities allowed to promote
  • Training/Prod deploy only from approved sources
  • Environments are continuously observed
  • Drift is reconciled explicitly
  • State is derived, not assumed

This does not eliminate complexity.
It names it, constrains it, and makes it survivable.

1 Like

Update: Direct CLI & Automated Solution Migration

The framework is now a formal package.

1. The “Zero-Impact” Install

Using uv allows you to run the SDK in an isolated environment without polluting your global Python site-packages:

# Create an isolated venv and install the SDK in one go (once)
uv venv
uv pip install kinetic-devops

# Activate the environment (once per session; Windows layout shown, use source .venv/bin/activate on macOS/Linux)
./.venv/Scripts/activate

# Run it anywhere
kinetic-devops --help

Note: The kinetic-devops command is now a first-class entry point. No more python -m prefixing.

2. Solving the “Solution Workbench” Nightmare

With the core goal of moving away from “Click-Ops,” I’ve added specific tooling to handle some common migration failures:

  • kinetic-devops zdatatable: Detects and syncs UD Column drift. It compares ZDataTable XMLs against the target environment to catch field conflicts before the Solution Workbench fails.
  • kinetic-devops solutions: Automates the backup and recreation of Solution Workbench definitions, ensuring .cab and .zip files are reproducible.
  • kinetic-devops meta: Programmatically fetches UI metadata and handles core layer import/delete operations.

3. Windows Developer Bootstrapping

For those on Windows, bootstrap-dev.ps1 in the repo root automates the full stack:

  • Installs uv, gh, and VS Code.
  • Injects “educational shunts” into your PowerShell profile to guide you toward uv and git best practices.

4. Refactored Routing

The internal dispatch has been overhauled for better CLI discovery. You can now get deep help on any submodule:
uv run kinetic-devops baq --help
uv run kinetic-devops solutions --help


Next up is hopefully refining the efx and report modules to get us closer to true “Configuration as Code” for Kinetic. Give the uv install a shot—it’s the fastest way to get the SDK live.

1 Like

I wanted to share a major milestone regarding the framework evolution. I just used v0.1.0a4 to execute a full environment promotion for a Solution containing a Kinetic UI customization and multiple BPMs.

The workflow followed a rigorous verification loop:

  1. Dev → Test: Initial promotion and functional testing.
  2. Environment Wipe: Manually wiped the Dev environment entirely to test ‘clean state’ deployment (mimicking Prod).
  3. Redeploy & Validate: Reinstalled the Solution definition and the build back into the clean Dev environment to verify the package would be fully reproducible once deployed to Production.
  4. Production Push: Promoted the verified package and the Solution definition to Prod.

The current solutions.py logic successfully handled ZDataTable and ZDataField conflicts automatically. In this case, Dev and Prod shared some legacy custom fields, but Dev had several new columns. The framework resolved these schema differences on the fly, ensuring all new columns were added to the target environment without the manual overhead or the ‘already exists’ errors that usually haunt the standard import process.

Since the 04/02 updates:

  • Pre-flight Validation: The tool now programmatically scans build logs and validation files for errors before finishing an install.
  • Automated Schema Resolution: Verified handling of custom field conflicts during deployment. Technical Note: The module leverages Ice.BO.ZDataTableSvc to programmatically reconcile schema differences before the push.
  • Layer Conflict Resolution: Added logic to automatically clear ‘exists in another company’ (e.g. tenant global MetaUI) errors via MetaFXSvc.
  • Redeploy Reliability: Confirmed that the framework can rebuild solution state from a clean slate or in a new environment!
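
For anyone curious how the schema reconciliation can be reasoned about, the heart of it is set arithmetic over column definitions: compare the UD columns the solution's ZDataTable XML declares against what the target environment reports. This is a hedged sketch of the idea only; the real module talks to Ice.BO.ZDataTableSvc, and none of the names below are the service's actual API:

```python
def columns_to_add(source_cols: dict, target_cols: dict) -> dict:
    """New columns present in the package but absent on the target (safe to create)."""
    return {name: dtype for name, dtype in source_cols.items() if name not in target_cols}

def type_conflicts(source_cols: dict, target_cols: dict) -> dict:
    """Same column name, different datatype: must be surfaced, never auto-resolved."""
    return {name: (source_cols[name], target_cols[name])
            for name in source_cols.keys() & target_cols.keys()
            if source_cols[name] != target_cols[name]}
```

In the Dev-to-Prod scenario above, the shared legacy fields fall out of the comparison untouched, the new Dev columns land in columns_to_add, and nothing hits the "already exists" wall.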

The code for this lifecycle is now live in the repo. If you’re tired of the manual friction in Solution Workbench, I’d love more eyes on the current solutions.py logic—it’s proving to be a huge time-saver for after-hours deployments.

1 Like