A Skeleton DevOps framework in Python for Epicor Kinetic - How I Manage Scripts and API secrets

After working on and updating ExportAllTheThings by @klincecum, I was inspired to get this out there and see how others are tackling the broader DevOps challenges in Kinetic.

I’ve been refining a “Bootstrap” process for our environment and put together a skeleton repo that might be useful for anyone else tired of managing local environments or (worse) having API keys floating around in disparate plain text scripts.

The repo is here: https://github.com/SteffesKBarrow/kinetic-devops - GitHub - Kinetic DevOps

Getting Started

The Problem:

Traditional Kinetic customization often involves manual exports, plain-text credentials in scripts, and inconsistent local environments. This makes migrating and scaling a team while maintaining security difficult.

What this Framework solves:

  1. Environment Consistency (uv + Scoop): ./scripts/bootstrap-dev.ps1 uses uv and Scoop to automate the dev stack setup. I included an init option that shunts pip and python commands via PowerShell functions added to the user’s $PROFILE. This encourages the team into project-isolated environments (uv run) so we prevent the “it works on my machine” issues caused by global package conflicts.
  2. Zero-Knowledge Security: Instead of just calling the Credential Manager, the auth.py logic implements a layer of PBKDF2+AES-256 encryption. Secrets can optionally be encrypted locally before storage, ensuring that even with access to the OS vault, the raw keys aren’t exposed without the secondary derivation. This keeps secrets out of Git entirely while maintaining a “Zero-Trust” posture for the user.
  3. Multi-Environment Token Identity: To handle the mess of switching between Dev, PILOT, and Production, the framework uses a “Triple-Locked Identity” for bearer tokens (Environment + User + API Key Hash). This prevents token collisions and ensures that scripts automatically pull the correct, limited-scope credentials based on the active environment context.
  4. The Kinetic SDK (BAQ & BO Wrappers): Beyond the setup, the core payoff is a set of clean Python wrappers for the Kinetic REST API. This includes a BAQClient for high-performance data retrieval and BOReader wrappers that simplify interacting with Business Objects, turning complex REST calls into standard Pythonic patterns.
  5. Deterministic & Portable Infrastructure: While the bootstrap is in PowerShell for ease of use on Windows, the rest of the infrastructure is platform-agnostic. The environment is portable to WSL or native Linux, and the uv.lock file ensures that you are using the exact same dependency tree regardless of the OS.
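To make the encryption layer in point 2 a little more concrete, here is a minimal stdlib sketch of the key-derivation step. This is illustrative only: the iteration count and salt handling in the repo's auth.py may differ, and the actual AES-256-GCM encryption of the secret would come from a third-party library such as `cryptography` (AESGCM), which isn't shown here.

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Stretch a passphrase into a 32-byte key (the size AES-256 expects)
    using PBKDF2-HMAC-SHA256. The salt is stored alongside the ciphertext."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations, dklen=32)

# Same passphrase + same salt -> same key. The OS vault only ever sees the
# AES-256-encrypted blob produced with this derived key, never the raw secret.
salt = os.urandom(16)
key = derive_key("my master passphrase", salt)
assert len(key) == 32
```

The point of the secondary derivation is that even someone with access to the OS credential vault still needs the passphrase to recover the raw key.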

Developer Experience

The shunt system (PowerShell profile injection) educates instead of restricts—if you try pip install, it redirects you to the uv workflow. This helps keep team habits consistent without being heavy-handed.

Next Steps: Integrating ExportAllTheThings

The immediate roadmap is incorporating ExportAllTheThings logic into the pipeline. But more importantly, the repo includes a project template designed for teams to extend the framework with client-specific customizations while keeping the core portable.

I’m curious to hear how others are handling local environment isolation or if you’ve integrated similar “standard work” guardrails into your Kinetic dev cycle.

Kevin

13 Likes

Automated Report Deployment
I also forgot to mention the ReportService utility! It automates the “manual build (zip), upload, test, repeat” dance for SSRS. We can now package and deploy custom Report Styles directly to Kinetic SaaS for immediate testing using simple CLI commands:

# Package multiple report folders into a single deployment archive
uv run python scripts/build_kinetic_reports.py --report-name PackingSlip --output-name PackSlip_CMT_Reports.zip ./projects/PackingSlip
uv run python scripts/build_kinetic_reports.py --append --report-name ShippingLabels --output-name PackSlip_CMT_Reports.zip ./projects/ShippingLabels

# Deploy the bundle to the Report Service
uv run python -m kinetic_devops.report_service deploy ./bin/PackSlip_CMT_Reports.zip --report-id projects/CustomReports/PackingSlip/PackSlip,projects/CustomReports/ShippingLabels/ShipLabl

This moves our reports out of the “black box” of the UI and into Git-tracked source control, ensuring deployments are consistent across environments without opening a browser or touching Report Style Maintenance.

Note on Feature Completeness: To be clear, this is currently built for the deployment and update of existing Report Styles. It does not yet support creating brand-new Report Style records or downloading/extracting files from the server—those are still on the roadmap!

4 Likes

Hi Kevin,

I seem to be using python more and more and your framework looks really promising.

I’m just trying to set it up locally to understand how to work with it, and when I set up the environment credentials, I get the following:

(screenshot omitted)

I pasted the API key into the API (scoped) field and saved the record. I didn’t see anything to indicate that the key had saved successfully.

When I run the script below, I get the following error.

I have updated the company name to the company I am targeting.

Please can you let me know what I have done wrong?

Thanks,

Andrew.

Error

Script

from kinetic_devops.auth import KineticConfigManager
from kinetic_devops.baq import BAQClient

# Initialize authentication
config = KineticConfigManager()

# Create BAQ client (handles session, headers, requests)
baq_client = BAQClient(config)

# Execute a BAQ
results = baq_client.execute_baq(
    baq_id='CUST_List',
    args={'Company': 'ABC', 'MaxResults': 100}
)

# Process results
for customer in results['value']:
    print(f"{customer['CustomerID']}: {customer['CustomerName']}")

This is by design; it reduces the risk of your API key appearing in logs or screen captures.

Have you initialized the environment:

.\scripts\env_init.ps1 Test2024 .venv

Also, you can validate that your API key is saved with the following:

uv run python -m kinetic_devops.auth list

Your main entry point for development is going to be the base client; otherwise you’re reinventing the wheel.

from .base_client import KineticBaseClient

Best analogy is that KineticBaseClient is the little black box I’ve built and shared with the community. It’s intended to handle all the communication and authentication so other functions etc. don’t have to.

kinetic-devops/kinetic_devops/baq.py at main · SteffesKBarrow/kinetic-devops

"""
kinetic_devops/baq.py

BAQ Service - Query Kinetic Business Analysis Queries.

Inherits session management from KineticBaseClient.
Provides methods to:
- Execute BAQs with filter and column selection
- Override company and plant context
- Export results to JSON
"""

# kinetic_devops/baq.py
import argparse
import json
import sys
import requests
from .base_client import KineticBaseClient

class KineticBAQService(KineticBaseClient):
    """BAQ service inheriting session management from BaseClient."""
    

The block that initiates a BAQ retrieval is:


        # 1. Initialize the service (Inherits env/user selection from Base)
        service = KineticBAQService(args.env, args.user)
        
        # 2. Execute the BAQ
        data = service.get_baq_results(
            baq_name=args.baq, 
            query_params=args.params, 
            company=args.co, 
            plant=args.plant,
            debug=args.debug
        )

Which uses the method get_baq_results from the class KineticBAQService(KineticBaseClient)


And after typing all that, I’m not sure I honestly answered the question; I probably over-explained, and badly. I’m still muddling through with Python myself.

Best Regards,
Kevin

:hammer_and_wrench: The Roadmap: From “Deployment” to “Instant Feedback Loops”

Thanks for the positive reactions so far! It’s clear the struggle with manual overhead and the lack of formal DevOps in Kinetic is a shared trauma we’ve all lived through.

I’m currently mapping out where to take this next. While the ReportService utility is the current highlight, the broader goal is to move the “heavy lifting” of Kinetic development and testing into this Python layer. We are the primary stakeholders, developers, and architects, so I want to know which pain points hit you the hardest.

I am also officially inviting participation in the development. If you’ve solved a specific piece of the Kinetic API puzzle, I’d love to see those Pull Requests.

Closing the Loop: The “Stress Test” :page_facing_up:

The vision is to move beyond just “pushing” a report and into a full validation cycle. We can extend the framework to handle the entire “manual” testing process:

  1. Mass-Trigger: Pass a list of “Benchmark” IDs (e.g., 10–20 different POs with various scenarios) via the API.
  2. Poll & Retrieve: The script monitors the System Monitor and automatically downloads the PDFs to a local temp/ folder.
  3. Auto-Preview: Your default PDF viewer pops open with all results ready for a side-by-side “sanity check.”
  4. Upgrade Migration Review: Automate side-by-side comparisons of the Production (current) version vs. the Test/SIT versions to catch regression before it hits the users.
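Steps 2 and 3 above could be sketched roughly like this. `fetch_status` and `download_pdf` are hypothetical stand-ins for the real System Monitor REST calls, which the framework would supply; this is a shape, not the actual implementation:

```python
import time
from pathlib import Path
from typing import Callable, Iterable, List

def poll_and_download(report_ids: Iterable[str],
                      fetch_status: Callable[[str], str],
                      download_pdf: Callable[[str], bytes],
                      out_dir: Path,
                      interval: float = 2.0,
                      timeout: float = 300.0) -> List[Path]:
    """Poll until each benchmark report completes, then save its PDF locally.
    The two callables are placeholders for the actual Kinetic REST calls."""
    out_dir.mkdir(parents=True, exist_ok=True)
    pending, saved = set(report_ids), []
    deadline = time.monotonic() + timeout
    while pending and time.monotonic() < deadline:
        for rid in list(pending):
            if fetch_status(rid) == "COMPLETE":
                path = out_dir / f"{rid}.pdf"
                path.write_bytes(download_pdf(rid))  # step 2: retrieve
                saved.append(path)
                pending.discard(rid)
        if pending:
            time.sleep(interval)  # back off between System Monitor polls
    return saved
```

Step 4 (auto-preview) would then just be handing the returned paths to the OS default PDF viewer.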

:magnifying_glass_tilted_left: “Hot Reload” for Kinetic

We can get remarkably close to a modern web-dev “Hot Reload” for SaaS ERP tasks:

  • Edit: RDLs, specialized scripts, or data mappings in VS Code/Visual Studio.
  • Execute: Run a single command (or let a file-watcher/debug hook trigger it).
  • Verify: See the rendered PDF or API response a few seconds later without ever leaving your IDE.

:elephant: Addressing the Elephant: Bold Reports

We all know the SSRS clock is ticking. While we don’t have a concrete Go-Live date for the transition to Bold Reports, I’m building this framework to be the bridge.

By moving this logic into a Python/CLI layer now, we insulate ourselves from platform shifts. When the underlying engine moves to Bold, we update the ReportService provider. Your Source of Truth stays in Git, your deployment commands stay the same, and your workflow doesn’t break—even when the ERP’s reporting engine does.
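One way to keep the CLI stable across the engine swap is a provider interface behind ReportService. The class and method names below are illustrative, not the repo's actual API:

```python
from abc import ABC, abstractmethod

class ReportProvider(ABC):
    """Engine-agnostic deployment contract; the CLI only talks to this."""
    @abstractmethod
    def deploy(self, archive_path: str, report_id: str) -> None:
        ...

class SsrsProvider(ReportProvider):
    def deploy(self, archive_path: str, report_id: str) -> None:
        # today: zip upload via the SSRS-backed Report Service REST endpoint
        ...

class BoldProvider(ReportProvider):
    def deploy(self, archive_path: str, report_id: str) -> None:
        # later: Bold Reports endpoint; the CLI surface stays identical
        ...
```

Swapping engines then becomes a one-line provider selection, while the Git-tracked sources and deployment commands stay untouched.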

What else is on the Horizon?

  • fetch / pull: Pull existing RDLs or configurations from the server directly into a local Git repo.
  • create: Scaffold brand-new Report Style records, or even Report Data Definitions (RDD), directly via VS Code and the CLI.
  • Visual Regression: Long term, we could compare “Current” vs. “New” outputs to highlight layout shifts or data changes automatically.

The framework is built to handle much more than just reports, but the SSRS edit, zip, upload, test, repeat cycle was one of the most immediate bottlenecks for me. To my fellow project stakeholders, I’m curious: is “Auto-Download & Open” the game-changer you need, or is there another part of the Kinetic ‘DevOps’ lifecycle that we should be prioritizing first?

If you’re interested in contributing to the repo or have ideas on the API implementation for these roadmap items, let’s talk. I’d love to see this become a community-driven standard for how we manage Kinetic environments.

These are the pain points (not solutions) that I shared with Epicor:

Single Source of Truth

Users have to manually manage what code is in each environment. There is no way to know the source of the code that is currently running in production. I would guess that most customers work directly in production, aka Click-Ops. Some User Developers may have a personal versioning system where they rename objects with dates, especially if there is only one developer. Once there is more than one developer (within the company, a consultant, Professional Services, etc.), it becomes very difficult to manage where running code comes from.

Loss of Development Work

There are several ways that User Developers lose work:

Clobbered Databases

Since all work is stored in the database, any restore will wipe out work in that instance. In the cloud, hardly an upgrade goes by without a refresh of Pilot wiping out work because a user dev did not export it ahead of time.

With the new Cloud Portal, we can expect more of this as users have control of restoring instances.

Corrupted layers

As Application Studio is still maturing, there are still several ways to corrupt a layer and have to restore a previous version or completely start over.

Configurator

The Product Configurator has been reported as losing work too.

Product Configurator - Numeric Box - ‘Initial Value’ is Stuck - Kinetic ERP - Epicor User Help Forum

Importing Solutions

Users who use an old Solution Workbench CAB file can accidentally clobber newer work if the CAB file wasn’t built with the latest versions.

In Directive Import, enabling “Replace Existing Group” without knowing which Groups are assigned to various directives can wipe out a lot of work with no easy way to restore it.

Upgrades

Probably the biggest adjustment for companies is upgrading on a faster schedule. I was in the cloud for three years. As time goes by, users do get quicker at upgrades. But I do worry that they don’t check patches (and even upgrades) as thoroughly as they should. Automation would be very welcome here.

Testing

There are several levels of testing required for upgrades: end-to-end testing as well as some “unit” testing.

Pre/Post Processing

Prior to the upgrade, we would lock out the users (DMT set the Active Flag to False for non-Security Manager Users) and run several reports: Balance Sheet, Income Statement, Stock Status, Open A/R, Open A/P, Uninvoiced Receipts, Uninvoiced Shipments, etc. After the system is released by the cloud team, we would run the same reports and compare to make sure no financials changed. We would then reactivate some users to do a sniff test on common quote-to-cash activities.

By the way, we did this for the Pilot upgrade as well. We want to simulate the actual upgrade as closely as possible.

Development

Functions

Kinetic Functions are one of the most useful tools for the User Developer/Partner. The Promote/Demote functionality is a pain-point though. Demoting a helper library to update the code can throw errors all over the system. To just view the code, one has to demote it, unless something has changed. Having a staged version of code is an excellent idea! But that only works in REST. I don’t think a Function or Directive can call a demoted Function directly in code, but I could be wrong. How should the user developer manage this? Create a UI library and then have that library call other libraries/functions? What is the error handling story? Should the user developer cycle between A and B versions of a library for easier roll back? Should we handle 404s (the error one gets when a function is demoted) in the UI by calling a backup function?

Having a test harness for Directives/Functions that could mock some data would make upgrade testing (and development) less painful.

Auditing

For companies who require SOX compliance, it is challenging to prove who installed code, what it does, who approved it, and when. While seriously overkill, this is what CapitalOne does with DevOps to improve their compliance position: Governance in a DevOps Environment | Capital One

Tracking New Development

Documentation around new development doesn’t have a home.

  • Who asked for changes? What business problem does it solve? Is there an Issue Log to help people in the future?

  • Who else is working in this area? Will there be accidental clobbering by coworkers or consultants?

  • AI can discover problems and automatically add issues

Instance Management/Deployment

Knowing which instances are running which version of Kinetic can be very challenging. When there are multiple instances, there are (potential) steps that people have to take when copying live to another instance:

  • Mark instance as non-prod to indicate to integrations to behave differently (EDI, automatic email, Automation Studio, Data ingestion to BI systems, etc.)

  • Change Company Name/Colors

  • Configure integrations (SMTP, Automation Studio, EDI, Credit Card, etc.) to not interfere in production environments

  • Alter Task Agent to stop running some tasks

Copying all customizations (directives, BAQs, SSRS reports, etc.) is clunky and error-prone.

People resort to direct database manipulation for this, and I have a Start Up function, but again, the source of settings lives in the database and can be clobbered.

Deployment

We can export all the source code in the world, but if it cannot be installed in an automated fashion, its usefulness is limited. When things go badly, being able to run a deployment to build or rebuild an instance would be very helpful.

Monitoring

Not being able to easily observe what is happening in the system is very painful. Is CPU high? Memory starved? Hung processes? Disc space adequate? Time to order more licenses?

In Development

Debugging code is a slow process. Without a debugger, capturing logs, exporting the log, and then viewing the log is a long cycle. Being able to write logs to a sink that would display in (near) real time would be an improvement.

There was a recent post on EpiUsers showing the need for better access to logs:

Application+Server Logging/Server File Download/Functions Kinetic Dashboard - Code Review / Sharing - Epicor User Help Forum

In Production

When deploying user code or post-upgrade, it would be useful to highlight new errors since changes were made in code or infrastructure. Non-error logging would be helpful as well, like “is MRP taking longer to run?”, “Are there new non-fatal errors?”, “Are BAQs running longer?”, “Is License Usage the same?”, etc.

Business Process Monitoring

Utilizing a monitoring system could improve business process adherence. Instead of sending out emails for everything, track objects in invalid states and perform a single notification: parts without standard costs on open purchase orders, material not issued to jobs with part completions, sales order lines with low margins, etc.

Security

It would be wonderful to have testing data that is not actual live data. This is a bigger topic, but it is a security concern to share data with 3rd parties like Epicor and even user developers. It is a pain point but one that could be eased if objects could be exported, anonymized, and imported. The onus would be on the users to build their test data, but having tools to help build a test/dev environment would reduce this pain. The same test data would be used to check for regressions during upgrades.

Configuration Management

Leaning on Configuration as Code would remove the pain of configuration management. If a repository had a list of current Security Managers, the system could check if new ones were added without proper vetting. Keeping settings in code could test for configuration drift to make sure the software’s behavior isn’t altered without proper change management.
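The Security Manager check could be as simple as a set comparison between the repo's declared list and what the live system reports, e.g. via a BAQ over the security tables. The names below are made up for illustration:

```python
def detect_drift(declared: set, actual: set) -> dict:
    """Compare the Security Managers declared in the repo against the ones
    the live system reports. Non-empty sets mean configuration drift."""
    return {
        "unvetted": actual - declared,  # granted outside change management
        "missing": declared - actual,   # revoked (or lost) outside the repo
    }

repo_managers = {"kevin", "mark"}            # from a file in Git
live_managers = {"kevin", "mark", "tempconsultant"}  # from the live system
drift = detect_drift(repo_managers, live_managers)
```

Running something like this on a schedule in CI makes the repo the authoritative list; any non-empty result becomes a change-management alert.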

One might also be able to lean on external authorization providers to offer just-in-time security. For example, if a user is on vacation and someone else is doing their job, the capabilities will exist for a given period of time and then be automatically removed. At one point, the Cloud Team added that as a capability for support staff to access a system for a given period of time.

This would be one way to remove the need for impersonation. If someone has access to the repository, then just committing a change would start a process that adds capabilities without directly granting access to those people.

AI

Can’t escape this topic! How can user developers take advantage of the new capabilities?

  • Prompts to replace wizards: Ensure formats of fields, mandatory, etc.

  • Recoder requires the developer to manually run the browser to record scripts. Tools like Playwright have an MCP that navigates pages and creates tests. They also have CLI versions that are more AI friendly.

  • While code in SaaS databases can be scanned to train AI models, we’re missing out on the on-prem code. If source were stored in a central repo like GitHub, it could be added to the corpus if companies voluntarily shared it.

5 Likes

BPM or Functions can call another function whether it is promoted or not

2 Likes

Hello Mark,

Your point about the lack of a “Single Source of Truth” is exactly the type of problem I’m trying to solve with this Python framework. I’ve been considering a robust approach using: Asymmetric Digital Signatures.

The idea is to treat every deployment like a signed software package. When the Python framework pushes a customization via REST, or is instructed to manually sign a state, it will generate a signature using a Private Key. This signature isn’t just for the code; it’s a hash of the SysRowID, the SysRevID, and the GitCommit hash.

I’m planning to store these in a “Manifest” format using the SysTag table, prefixed with DevOps: for easy querying:

  • DevOps:Commit:[Short-SHA]: Identifies exactly which Git commit this code belongs to.
  • DevOps:Sig:[Base64-Signature]: The “Seal.” This is the hash of (SysRowID + SysRevID + GitCommit) signed by the Private Key.
  • DevOps:Origin:[Env_Name]: Prevents “Test” code from being flagged as “Prod” if someone tries to manually port it.

This creates a Tamper-Evident Seal stored directly in the SysTag table:

  • Authoritative Identity: We can prove the code was signed by the DevOps pipeline because only the pipeline holds the Private Key.
  • Automatic Drift Detection: If a user performs a “Click-Ops” change in Production, the SysRevID increments. Because the user can’t re-sign the object, the existing signature becomes mathematically invalid.
  • Environment Integrity: Since the SysRowID is baked into the signature, you can’t simply copy-paste “Signed” tags from a Test environment into Production to bypass the check.

By using the SysTag table as a ledger for these signatures, we can turn Kinetic into a “Verified-Only” environment. Any object without a valid signature against our Public Key is immediately flagged as “Unofficial” or “At Risk.”
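To sketch the mechanics (names here are illustrative; the real signer would be an asymmetric private key, e.g. an Ed25519 key from the `cryptography` package, held only by the pipeline):

```python
import base64
import hashlib

def manifest_digest(sys_row_id: str, sys_rev_id: str, git_commit: str) -> bytes:
    """Canonical digest of (SysRowID + SysRevID + GitCommit); this is what gets signed.
    A Click-Ops edit bumps SysRevID, changing the digest and invalidating the seal."""
    return hashlib.sha256(f"{sys_row_id}|{sys_rev_id}|{git_commit}".encode()).digest()

def systag_manifest(sys_row_id: str, sys_rev_id: str, git_commit: str,
                    env_name: str, sign) -> list:
    """Build the DevOps: SysTag entries. `sign` is a placeholder for the
    pipeline's private-key signer; verification uses the matching public key."""
    digest = manifest_digest(sys_row_id, sys_rev_id, git_commit)
    return [
        f"DevOps:Commit:{git_commit[:8]}",
        f"DevOps:Sig:{base64.b64encode(sign(digest)).decode()}",
        f"DevOps:Origin:{env_name}",
    ]
```

Because the SysRowID is inside the digest, copying the tags from a Test record onto a Production record produces a digest mismatch, which is the environment-integrity property described above.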

I’m curious to hear your thoughts on this idea. Have you seen anyone attempt this level of cryptographic auditing within the Epicor ecosystem before, or do you think the overhead of managing the keys might be a deterrent for smaller teams?

1 Like

I have not seen anything close to this.

How do you envision the source code looking in source control? Completely readable JSON (no encrypted/encoded strings)? Other?

Exactly. The ideal path is to store the Source of Truth in its native tongue—be it .cs for a BPM, .sql for a BAQ, or .py for a script.

Even if the final Epicor artifact requires those strings to be double-encoded or wrapped in a dozen layers of JSON metadata, the repo should hold the raw, ‘naked’ code. This ensures True Human Readability and Granular Diffs. If you change a single where clause, the Git diff should show exactly that line—not a 2MB wall of re-encoded JSON noise.

To make this viable, the framework has to be a two-way street (Bi-directional Synchronization):

  • Pushing (Compiling): The tool acts as a compiler. It takes the readable source code, re-encodes/nests it into the ‘transport’ format Kinetic requires, and pushes it via the API.
  • Pulling (Decompiling): When fetching from Kinetic, the tool strips away the ‘infrastructure’ JSON, decodes the strings, and writes clean, native files back to the repo.

Essentially, we treat the Git repo as the Source Code and the Kinetic environment as the Runtime. If we can’t ‘round-trip’ the code without losing formatting or introducing noise, we haven’t actually solved the DevOps problem; we’ve just moved the mess to a different folder.

  • The Idempotency Trap: In the Epicor world, Kinetic changes ModifiedOn timestamps or GUIDs just by opening a record. The “Decompiler” should probably ignore or normalize those fields to keep the Git history clean.
1 Like

You’re exactly right. If SysRowIDs or ModifiedOn are stored, then the diff won’t work. Ideally, a commit within Kinetic should push to GitHub, Azure DevOps, or another online repository directly since the browser has no access to your local instance of Git. Having source control built into the development environment will be more accessible to less technical users who might find running Python locally unapproachable. And Kinetic should always save/export/commit it in the same record/attribute order to prevent false diffs.

Some thought needs to be made for a branching strategy too. Versioning requires some decisions as well. Semantic Versioning is based on Major:Minor:Patch (12.1.100) and Kinetic is Major:Minor:Build:Revision (12.1.100.14). Are branches named after the version for historical purposes? Combine the Epicor Major and Minor to get something like: 12001.100.14, 1201.100.14-featureName?
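One possible mapping from Kinetic's four-part version to a branch name, following the combine-Major-and-Minor suggestion. Zero-padding the minor to two digits (12.1 → 1201) is my assumption here; the thread leaves the exact padding open:

```python
from typing import Optional

def kinetic_to_branch(version: str, feature: Optional[str] = None) -> str:
    """Map Kinetic's Major.Minor.Build.Revision to a SemVer-ish branch name
    by fusing Major and Minor (padding is an assumption, not a standard)."""
    major, minor, build, revision = version.split(".")
    base = f"{major}{minor.zfill(2)}.{build}.{revision}"
    return f"{base}-{feature}" if feature else base

kinetic_to_branch("12.1.100.14")                 # historical branch
kinetic_to_branch("12.1.100.14", "featureName")  # feature branch off that version
```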

Most importantly, any tool should be CI/CD friendly. No click-ops into Production. Asking people to manually run a tool may be viewed click-ops in disguise.

Mark,

You hit the nail on the head regarding the “Click-Ops in disguise” trap. If the end state of a “DevOps tool” is still a human sitting in a UI—even a pretty one—manually promoting a solution, we haven’t actually solved the problem; we’ve just redirected the friction.

To move beyond that in a SaaS environment, we have to stop treating the ERP as a destination for manual configuration and start treating it as Remote Infrastructure as Code (IaC).

The Python skeleton framework I envision isn’t just a script runner; it’s a Remote Deployment and Verification engine. By using the REST API to “push” changes, we effectively turn our non-production Kinetic instances into build agents and test platforms (what they should be).

Here is a strategy that bridges that gap:

1. The “Shadow” Strategy: Shadowing instead of Shipping

The goal isn’t to move code, but to move the context around the code. Instead of a blind production “push,” the framework should allow for a pre-tested Shadow Block deployment. We can inject logic into a Pilot environment that mirrors Production data:

// Injected via Kinetic-DevOps Framework: The Shadow/Canary Block
var weight = new Random().NextDouble();
if (weight < 0.10) { 
    try {
        // The 10% Canary: Execute New Logic silently
        // This is a "Mirror" call to a Pilot/Clone environment or a 'Dry Run'
        this.Lib.MyNewService.Execute(ds); 
        // Logic here would log the result to a custom 'Telemetry' table for a Deep Diff
    } catch (Exception ex) {
        // Silent failure: The shadow must NOT break the Production transaction
        Ice.Diagnostics.Log.WriteEntry("DevOps-Shadow-Failure: " + ex.Message);
    }
} 
// Execute Legacy logic as the source of truth

This allows for Zero-Downtime Verification. We prove the new logic against real-world production inputs in a “Silent Pilot” environment before we ever risk a billing error or a GL mess.

2. Intelligent JSON Diffing

Because we are on SaaS, we have to manage metadata, not just code. The framework should parse and cleanse the AppStudio and BPM JSON payloads—stripping out the system noise like SysRevID, SysRowID, and timestamps. This gives us a Deep Diff in Git that actually shows the business logic change, rather than a wall of GUID updates.
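A minimal sketch of that cleansing pass, assuming the volatile field names are SysRevID, SysRowID, and the Modified* columns (the real list would need verifying against actual exports):

```python
import json

# Assumed noise fields; extend after inspecting real AppStudio/BPM exports.
VOLATILE_KEYS = {"SysRevID", "SysRowID", "ModifiedOn", "ModifiedBy"}

def normalize(node):
    """Recursively drop volatile system fields and sort keys so two exports
    of the same logic produce identical text."""
    if isinstance(node, dict):
        return {k: normalize(v) for k, v in sorted(node.items()) if k not in VOLATILE_KEYS}
    if isinstance(node, list):
        return [normalize(v) for v in node]
    return node

def canonical(payload) -> str:
    """Stable text form to commit to Git and feed to the diff."""
    return json.dumps(normalize(payload), indent=2)
```

With this in place, two exports that differ only in timestamps and revision counters diff as identical, so the Git history shows just the business-logic change.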

3. Atomic, Headless Lifecycle

This approach effectively “headless-es” the ERP’s logic. We keep the UI and the Data Store in Kinetic, but the Lifecycle belongs to Git.

  • Versioned Assets: Every BPM, Function, and Configuration is a human-readable source file, regardless of ‘language’.
  • Atomic Deployment: No more 50-item .cab files. We deploy atomic, verified changes via REST, ensuring Production and Git stay 1:1.
  • Rollback: A rollback is a git revert and a re-run of the Python deployer, not a frantic manual hunt through the Solution Workbench.

This moves us from “Click-Ops” to a state where the ERP is just another target in a CI/CD pipeline.

1 Like

No offense Kevin, but the AI assisted posts are kinda unnecessary.

Just chat with us man. It’s okay to just be regular old messy you, I promise :grinning_face_with_smiling_eyes: that’s what makes you awesome

AI does a lot of things really well, but one thing it still struggles with is sounding warm and human. When something reads a little too polished it lands in that weird uncanny valley where it almost feels like we’re debating a robot.

The topic itself is actually interesting though, so I’d honestly rather just hear how you think about it.

This community has always been pretty raw and honest, sometimes to a fault :joy:… probably a good example is right now :zipper_mouth_face:

But honestly man, there’s nothing more fun than getting into an honest nerdy debate about DevOps with people who are passionate about it like @Mark_Wonsil. Give it a go, it’s a lot more satisfying than letting a chatbot do the talking.

9 Likes

True, but I was hoping to be able to call either the promoted or demoted version at the same time. More generically, it would be useful to call different versions based on feature flags so some users, who are testing, would get the new version while others would get the current version.

Interesting idea. I suppose one could have a function caller function which orchestrates calls to versioned function libraries. Always use it at the edge. Inferring signatures could be an interesting hurdle. What other caveats would be expected?

This is exactly what @klincecum did with his Function Runner library. But that relies on reflection, so…

Thanks for the feedback. My general rule of thumb is that I use AI to express complex topics, because AI expresses complex topics and ideas really well, and that’s something I often struggle with. I’ll get halfway through a lengthy post and the ‘thing’ I’ve tried to express has gone through 10 iterations; the beginning doesn’t align with the middle or the end, and my post no longer conveys my meaning. I use AI to bridge that gap; for me, AI-assisted content is a tool for clarity, not a lack of authenticity.

While I appreciate the sentiment, and feel that it’s coming from a good place, that has not been my lived experience, being ‘regular old messy me’ usually comes at a high price, e.g. being dismissed as having nothing of value to contribute because I’m struggling to articulate my ideas to my target audience.

I will try to lean on AI a little less, but AI-assisted content is an accessibility tool for me and I would appreciate your consideration.

7 Likes

I lean heavily on AI in daily life and can understand using it as an accessibility tool. My hands hurt, I don’t want to keep typing too much longer, and AI represents a leap forward in the ability to “voice to code”; it has previously been an exercise in futility utilizing voice programming tools (I have a fancy Philips SpeechMike, Dragon software, etc.).

It would benefit your use case to utilize the “provide your own training” pieces of AI - which goes by many names such as “spaces”, “gems”, “GPTs”, “studio” etc. - provide it some markdown documentation guiding it as to your grammar and diction, enable it to speak in your “voice” - but with the refinements you desire. That will make it come off as much less abrasive (or even nearly transparent as a translator should strive to be)

5 Likes

Thanks for the insight Kevin, that makes sense. Like @GabeFranco said, maybe telling the AI to be a little less formal might help bridge the Uncanny Valley that sometimes bleeds through. But I appreciate the sentiment and now understand better why you use it.

-Cheers

3 Likes

And who isn’t…

1 Like