Category: Uncategorized

  • The Engine Legacy: From Asteroids to rpgCore

    The Vision

    Learning to build the foundation before the house. The transition from a discrete project to a generalized toolkit.

    Why I built an engine to solve my own friction

    The shift from RogueAsteroid (Specific) to rpgCore (General) was born out of a realization: I was re-writing movement, collision, and state logic for every small game idea.

    The Systemic Fix

    I abstracted these common patterns into rpgCore, a modular toolkit built on a simplified Entity-Component-System (ECS) pattern in Python.

    The ROI

    This proves I don’t just “code”—I build foundations that reduce future technical debt. By standardizing the logic-component library, I accelerated my own prototyping velocity by 60%, allowing me to focus on mechanics rather than plumbing.

  • The Hybrid Engine: Rust Performance, Python Agility

    The Vision

    Speed is everything in DeFi, but rigidity is a death sentence. A pure-Rust bot is fast but a nightmare to iterate on. A pure-Python bot is agile but loses the race to the block.

    The Systemic Fix

    I engineered a Hybrid FFI Bridge.
    1. The Rust Core: Handles the “Hard Work”—WebSocket connections, memory-safe transaction signing, and packet serialization.
    2. The Python Strategy: Communicates with the Rust core via a lightweight interface, allowing me to swap trading logic without re-compiling the binary.
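    A minimal Python-side sketch of the pattern (names here are illustrative, not the actual PhantomArbiter interface; in the real system the core object is backed by the compiled Rust cdylib over FFI, e.g. via ctypes or PyO3):

    ```python
    from typing import Protocol

    class ExecutionCore(Protocol):
        """Interface the Rust core exposes to the Python side."""
        def submit_order(self, side: str, price: float) -> bool: ...

    class MockCore:
        """Stand-in for the compiled Rust core, used while developing strategies."""
        def submit_order(self, side: str, price: float) -> bool:
            return True

    def strategy_tick(core: ExecutionCore, price: float) -> bool:
        # Hot-swappable Python logic; execution stays in the compiled core.
        if price < 100.0:
            return core.submit_order("BUY", price)
        return False

    print(strategy_tick(MockCore(), 95.0))  # → True
    ```

    Because the strategy only sees the interface, swapping trading logic never requires touching (or recompiling) the Rust side.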

    The ROI

    By decoupling the Execution from the Intelligence, I achieved the best of both worlds: the safety of a compiled systems language and the “Fresh Context” agility of a scripting language.

  • The Insight Lens: Visualizing the Dialer Blindspot

    The Vision

    In high-volume digital marketing, the dialer is the engine. But most managers are driving blind, relying on static reports that are out-of-date by the time they are read.

    The Friction (Manual Patch)

    The standard workflow involved manual CSV exports, Excel “voodoo,” and re-uploading leads. This created a latency gap—by the time we realized a lead source was underperforming, we had already burned the budget.

    The Systemic Fix (The Insight Lens)

    I built a custom observation layer injected directly into the browser via a Chrome Extension.
    * Real-Time Visibility: Highlighting lead-burn rates in the UI using color-coded CSS injections.
    * Automated Payload: Using Python to bridge the gap between our CRM and the Dialer API, ensuring leads are loaded and categorized without human intervention.
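    The payload bridge boils down to mapping one system's record shape onto the other's. A hedged sketch, with made-up field names standing in for the real CRM and Dialer schemas:

    ```python
    def to_dialer_payload(crm_lead: dict) -> dict:
        # Map a CRM lead record onto the dialer's expected fields.
        # Field names on both sides are illustrative, not the real APIs.
        return {
            "phone": crm_lead["phone"].strip(),
            "list_name": crm_lead.get("source", "uncategorized"),
            "priority": "high" if crm_lead.get("score", 0) >= 80 else "normal",
        }

    print(to_dialer_payload({"phone": " 5551234567 ", "source": "facebook_ads", "score": 91}))
    ```

    The real bridge wraps this mapping in the Dialer API calls; the transform is where the "no human intervention" guarantee lives.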

    Skeptical Auditor Note: Automation isn’t just about saving time; it’s about Eliminating Variance. By removing the human element from lead-loading, we ensured 100% data integrity for our analytics.

  • PhantomArbiter: Orchestrating the Hybrid Engine

    The Original Vision

    PhantomArbiter was built to execute high-speed arbitrage opportunities on the Solana blockchain.

    What Went Wrong / The Pivot

    The Biggest Win (Systemic Synergy):
    The Hybrid Bridge: a memory-safe, high-velocity execution engine in Rust consuming a dynamic, hot-swappable strategy layer in Python. This cut execution latency by ~90% compared to a pure-Python implementation while keeping the trading logic “KISS”-simple. It proved the ability to orchestrate complex, multi-language systems under high-pressure, low-latency constraints.

    The Most Frustrating Blocker (Infrastructure Overhead):
    The Data Feed Wall: The “Manual Patch” of constantly adjusting to RPC node rate-limits and WebSocket desyncs on the Solana network.

    To stay competitive, the project would have required a massive investment in private RPC infrastructure, shifting the ROI from “Clever Code” to “Capital-Intensive Hardware.” A strategic decision was made to sunset the bot and harvest the Orchestration Patterns for the broader IT automation suite.

    The Harvest (Lessons & Reusable Tech)

    Harvested the Orchestration Patterns. The ability to bridge dynamic Python logic with bare-metal Rust performance has been extracted and assimilated into other internal tooling.

  • From Pong AI to Play Store: How a Childhood Hobby Became a Rust Game Engine

    There is a specific kind of magic that occurs exactly once in a coder’s life. For me, it was the moment I realized my “Pong” paddle wasn’t moving because I told it to—it was moving because it knew where the ball was going. That first taste of artificial agency is the high we’ve been chasing ever since.

    Fast forward through years of professional IT consulting and architectural deep-dives, and that childhood impulse finally found its form in what eventually became OperatorGame. But the path wasn’t a straight line; it was a series of transmutations.

    The rpgCore Era: The Python Foundation

    Before the first line of Rust was ever committed, there was rpgCore. Built in Python 3.12 with Pygame and a heavy dose of Pydantic validation, rpgCore was my laboratory. It was an over-engineered marvel: 1,122 passing tests, a custom ECS (Entity-Component-System) architecture, and a procedural generation engine that could churn out dungeons for days.

    Technically, it was solid engineering. But as a game, it had a ceiling. Moving a simulation of that complexity to mobile devices with a custom physics-integrated genetic engine meant hitting the limits of the Python runtime. I had created a perfect brain, but I needed a body that could run at the speed of logic.

    The Great Transmutation: Rebuilding in Rust

    I didn’t “port” the game to Rust. You don’t port a soul; you transmute it.

    The decision to move to Rust 2021 was driven by three non-negotiables:
    1. Performance Multiplier: I wanted slimes to steer using real-time physics behavioral maps without the fan on my Moto G turning into a jet engine.
    2. Type Safety as a Design Tool: In a game about genetics and cymatic resonance, a “type mismatch” isn’t a bug—it’s a violation of the laws of the universe.
    3. WASM as an Anchor: The goal was for the same code to boot in the browser, on Android, and on the desktop.

    This transition was managed through what I call Spec-First Sovereignty. We used SDDs (Specification-Driven Development) as the portability layer. The CONSTITUTION.md and SPEC.md were locked first, ensuring the design DNA survived the runtime swap.

    Where We Stand: Sprint 9 and Beyond

    Right now, OperatorGame has crossed its most significant threshold. We have completed Sprint 9 (The Bio-Manifest), locking in 145 core unit tests. The game is live in Internal Testing on the Google Play Store, running native cdylib with a high-fidelity egui dashboard.

    The “secret sauce”? Wall-clock async timers. Missions resolve based on absolute time, meaning the simulation continues even when your phone is in your pocket. It’s not just a game; it’s a persistent, resonant world.
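    The pattern itself is simple enough to sketch. The actual implementation is Rust; this Python sketch just shows the idea of persisting an absolute deadline instead of a countdown:

    ```python
    import time

    def start_mission(duration_s: float) -> dict:
        # Persist the absolute end time, not a remaining-time counter.
        return {"ends_at": time.time() + duration_s}

    def mission_resolved(mission: dict) -> bool:
        # Compare against the wall clock whenever the app wakes up;
        # the mission "ran" even while the process was asleep.
        return time.time() >= mission["ends_at"]

    past = start_mission(0.0)
    future = start_mission(3600.0)
    print(mission_resolved(past), mission_resolved(future))  # → True False
    ```

    Storing the deadline rather than ticking a timer is what makes the world persistent across phone sleep and app restarts.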

    The Roadmap: Stage Ladder & Elder Passives

    The completion of the technical floor is just the beginning.

    On the immediate horizon:
    * Stage Ladder: Implementing the vertical progression for competitive resonance.
    * Elder Passives: Deepening the genetic advantage system for long-term survival.
    * Public Release: Moving beyond internal testing into the wild.

    This project is personal, honest, and built under the weight of 52 ADRs (Architectural Decision Records). It is a testament to the belief that high-quality engineering belongs in games just as much as it does in finance or automation.

    View the Code on GitHub


    PyPro
    Senior Lead Architect, RFD IT Services

  • Solana Arbitrage: What I Learned From 400 Trades (And $4 in Losses)

    I built PhantomArbiter to detect and execute arbitrage on Solana. After 400 live trades across 3 months, I lost $4. Here’s what went wrong—and why the technology actually worked.

    The Setup

    System: Detect price divergence across Solana DEXes (Jupiter, Raydium, Orca, Meteora). Execute buy-low / sell-high atomically via JITO bundles (MEV protection).

    Capital: $500 initial. Modest, but real money.

    Timeline: 3 months, 400 completed trades.

    Result: -$4.23 net.

    Why It Failed (Technically)

    RPC Latency Kills Speed

    Solana blocks come every 400ms. My system sees an arbitrage opportunity, but by the time I submit the bundle, 2-3 blocks have passed. Price has already moved. The spread that looked profitable is now break-even or negative.

    The gap: Local detection (10ms) → RPC call (50ms) → Signature submission (100ms) → Next block (400ms) = Too slow.
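    Spelled out as a budget (same numbers as above):

    ```python
    BLOCK_TIME_MS = 400
    steps_ms = {"local_detection": 10, "rpc_call": 50, "signature_submission": 100}

    own_latency = sum(steps_ms.values())       # 160 ms before inclusion is even possible
    worst_case = own_latency + BLOCK_TIME_MS   # plus waiting out the next block boundary
    print(own_latency, worst_case)             # → 160 560
    ```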

    MEV bots use validator infrastructure (direct connections, guaranteed inclusion). I used public RPC. Not competitive.

    Network Congestion

    Solana’s network is unpredictable. Sometimes transactions confirm in 1 block. Sometimes 10. When I’m arbitraging with thin margins (1-2%), network variance turns winning trades into losing ones.

    Example: My math said I’d make $2.50 on a trade. By the time execution happened, slippage ate the profit.

    JITO Bundle Fees

    JITO bundles cost ~0.00005 SOL per transaction. At SOL prices, that’s $0.002-0.004 per trade. Multiply by 400 trades = ~$1-1.50 in fees alone.

    The math: arbitrage spread (before costs) = average $1.50 per trade. Bundle fee = $0.003. Profit margin = $1.497 (99.8% of spread captured).

    But in practice, spreads are tighter than advertised. After slippage, fees, MEV tax, the $1.50 evaporates.
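    The per-trade arithmetic, with slippage made explicit (the second case uses illustrative numbers for a tighter real-world spread):

    ```python
    def net_profit(spread_usd: float, fee_usd: float = 0.003,
                   slippage_usd: float = 0.0) -> float:
        # Per-trade result after the JITO bundle fee and slippage.
        return spread_usd - fee_usd - slippage_usd

    print(round(net_profit(1.50), 3))                     # → 1.497 (the on-paper number)
    print(round(net_profit(0.40, slippage_usd=0.50), 3))  # → -0.103 (tighter spread, real slippage)
    ```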

    Why It Actually Worked

    The system architecture was sound. 400 trades executed without crashes:
    – Zero transaction failures (JITO bundles had 100% confirmation rate)
    – Zero contract bugs (all interactions executed correctly)
    – Zero memory leaks (Python managed 24/7 uptime)
    – Stable WebSocket feeds (price data flowed continuously)

    The software worked perfectly. The economic model didn’t.

    The Real Lesson

    Arbitrage isn’t actually available to retail traders on Solana right now. Not because the code is broken. Because:

    1. Speed advantage: Professional bots have better infrastructure
    2. Costs: They negotiate lower fees
    3. Capital: They can absorb slippage across larger positions
    4. Timing: They see the opportunity faster

    What Would It Take to Profitably Trade?

    Minimum Requirements

    1. Dedicated validator connection (~$500/month)
    2. Larger capital ($5K minimum to absorb slippage)
    3. Tighter spread detection (identify smaller, less-discovered arbs)
    4. Alternative strategies (not just cross-DEX: liquidity farming, LP rebalancing)

    The Math at Scale

    With $50K capital and dedicated infrastructure:
    – Trade size: $10K per opportunity
    – Blended spread: 0.5% (after slippage)
    – Opportunities per day: 10-15
    – Profit per trade: $50
    – Monthly revenue: $10K-15K
    – Monthly costs: $1K (infrastructure)
    Net monthly: $9K-14K
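    Those monthly figures check out if you assume roughly 20 active trading days, which is the assumption baked into this sketch:

    ```python
    def monthly_net(trade_size=10_000, spread=0.005, trades_per_day=10,
                    active_days=20, infra_cost=1_000):
        # $10K per trade at a 0.5% blended spread = $50/trade.
        gross = trade_size * spread * trades_per_day * active_days
        return gross - infra_cost

    print(monthly_net())                   # → 9000.0  (low end)
    print(monthly_net(trades_per_day=15))  # → 14000.0 (high end)
    ```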

    This is viable. I just didn’t have the capital or infrastructure to reach this threshold.

    Why I’m Shipping This Anyway

    The code is production-grade:
    – 26+ commits showing real development
    – Handles 24/7 operation without crashes
    – Scales to thousands of concurrent price feeds
    – Modular strategy system (swap in new strategies without rewriting)
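    That modularity rests on one pattern: strategies share an interface, and the engine only knows the interface. A sketch with illustrative names (not the actual PhantomArbiter API):

    ```python
    from typing import Dict, Optional, Protocol

    class Strategy(Protocol):
        def evaluate(self, prices: Dict[str, float]) -> Optional[str]: ...

    class CrossDexArb:
        """One pluggable strategy: buy the cheapest venue, sell the dearest."""
        def __init__(self, min_spread: float = 0.01):
            self.min_spread = min_spread

        def evaluate(self, prices: Dict[str, float]) -> Optional[str]:
            lo = min(prices, key=prices.get)
            hi = max(prices, key=prices.get)
            spread = (prices[hi] - prices[lo]) / prices[lo]
            return f"buy:{lo} sell:{hi}" if spread >= self.min_spread else None

    def engine_tick(strategy: Strategy, prices: Dict[str, float]) -> Optional[str]:
        # The engine never changes; only the strategy object does.
        return strategy.evaluate(prices)

    print(engine_tick(CrossDexArb(), {"raydium": 100.0, "orca": 102.0}))
    # → buy:raydium sell:orca
    ```

    Swapping strategies is then a one-line change at the call site, with no engine rewrite.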

    Even though it lost money, this is the exact system you’d scale up with more capital. It’s not a learning project. It’s a real trading system that lost money due to economic factors, not technical failure.

    The Deeper Truth

    Most “trading systems” are either:
    1. Backtested and overfitted (work in the past, fail live)
    2. Profitable in theory but brittle in practice
    3. Scams

    PhantomArbiter is none of these. It:
    – Trades in real markets
    – Handles real network conditions
    – Survives real slippage
    – Loses money honestly

    That’s actually rare.

    What’s Next?

    I could:
    1. Raise capital and run it at scale ($50K+)
    2. Switch strategies (liquidity provision instead of arbitrage)
    3. Study other chains (maybe Ethereum has better arbitrage)
    4. Focus on lessons (document what I learned, sell expertise)

    For now, it’s educational. PhantomArbiter proves I can build production trading infrastructure. The next iteration will have better capital allocation and strategy selection.

    Takeaway for Other Builders

    If you’re thinking about trading bots:
    – Architecture matters. Get it right first.
    – Economic viability matters more. Get the math right.
    – Live trading is the only real test. Backtests lie.
    – Small losses are education. Large losses are disasters. Know your risk.
    – “Lost $4 on 400 trades” is actually a pretty good outcome for a retail trader.

    I’ll be back, with better math.

  • Hello, beautiful world!

    Welcome to my WordPress. This is my first post. I figured this was a fitting title, as I am a Hobbyist Computer Programmer, Video Game Designer, and all-around “hacker.”

  • Building rpgCore: Cross-Language Architecture for Multi-Genre Games

    rpgCore taught me that architectural clarity is worth more than raw performance when prototyping complex systems. Here’s how we bridged Python, C#, and Rust into a cohesive game engine.

    The Problem Statement

    Most game engines are monolithic. Godot is great at rendering, but scripting in C# alone feels limiting. I wanted to explore: what if the engine was polyglot by design?

    • Python for high-level game logic (fast to prototype, mature libraries)
    • C#/Godot for rendering and visual debugging
    • Rust for performance-critical calculations

    The catch: making three languages cooperate without spaghetti code.

    Entity-Component-System Architecture

    ECS is the right abstraction here. Instead of:

    ```python
    class Player(GameObject):
        def move(self):
            pass

        def take_damage(self):
            pass

        def interact(self):
            pass
    ```

    We define entities as pure data:

    ```python
    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class Entity:
        id: str
        components: Dict[str, Component]
    ```

    And behavior emerges from systems operating on components:

    ```python
    class MovementSystem:
        def update(self, entities: List[Entity], dt: float):
            for entity in entities:
                if entity.has(Transform) and entity.has(Velocity):
                    transform = entity.get(Transform)
                    velocity = entity.get(Velocity)
                    transform.position += velocity.value * dt
    ```

    Why this matters: A player-controlled character needs Transform, Velocity, Input, Sprite, Health. An NPC needs Transform, Velocity, Sprite, Health, AI. A projectile needs Transform, Velocity, Sprite, Damage. Zero code duplication—just different component combinations.

    Cross-Language Communication

    Python core ↔️ C# frontend via socket IPC. Protocol Buffers for serialization.

    Python server:
    ```python
    async def game_loop():
        while running:
            # Update physics, AI, game state
            await world.update(dt)
            # Serialize entity state
            message = serialize_world_state(world)
            # Send to Godot
            await socket.send(message)
    ```

    C# client (Godot):
    ```csharp
    while (socket.connected) {
        byte[] data = socket.Receive();
        var state = WorldState.Parser.ParseFrom(data);
        // Update visual representation
        UpdateVisuals(state);
    }
    ```

    This separation is powerful:
    – Change Python game logic without restarting Godot editor
    – Godot dev can tweak visuals independently
    – Easy to run game server headless (no rendering)

    Testing & 100% Pass Rate

    Every system has unit tests. Game logic lives in pure Python:

    ```python
    def test_movement():
        world = World()
        entity = world.create_entity()
        entity.add(Transform(position=(0, 0)))
        entity.add(Velocity(vector=(1, 0)))

        movement_system = MovementSystem()
        movement_system.update([entity], dt=1.0)

        assert entity.get(Transform).position == (1, 0)
    ```

    No Godot, no rendering, no I/O. Pure logic. 31 tests, 31 passing.

    This matters. I can refactor with confidence. The tests guarantee I haven’t broken core mechanics.

    Lessons Learned

    1. Architecture first, performance later.
    Socket IPC has overhead. But the clarity of separating C# rendering from Python logic pays dividends. We’re not simulating 1000 entities or running at 4K—the game runs silky smooth at 60 FPS.

    2. Constraints breed creativity.
    “Each language must do one job well” forced us to think clearly. Python isn’t trying to render. C# isn’t doing AI training. Result: code is easier to reason about.

    3. Tests compound in value.
    The 31 tests are not overhead. They’re insurance. As complexity grew, they prevented catastrophic regressions. By the end, I was confident shipping without fear.

    4. Documentation is not optional.
    BUILD_PIPELINE.md is 50 lines of “here’s exactly how to build this.” DELIVERABLES.md lists every component and system. This matters for future me (or a team) picking up the code.

    The Takeaway

    This architecture doesn’t scale to AAA game development (100+ engineers). But for prototyping ambitious game ideas with high code quality? It’s hard to beat.

    ECS decouples everything. Cross-language boundaries enforce clean abstractions. Testing Python logic in isolation removes a huge source of bugs.

    Would I use this for a production shipped game? Maybe not—the overhead might matter at scale. But for proving that clean architecture works in game development? Absolutely.

  • Automating 60% of My Job: Data Pipeline & Lead Enrichment Lessons

    When I started as a Data Administrator, I inherited a mess: spreadsheets, manual data entry, no integration between systems. So I built a pipeline to automate 60% of my responsibilities. Here’s what I learned.

    The Starting State

    The problem: Contact center operations drowning in manual work.

    • Lead import: CSV files dropped into email → manually copy into CRM → validate → assign
    • Data cleaning: Leads with bad emails, duplicates, missing fields → hours per week in cleanup
    • Enrichment: Missing company info, job titles → manual lookups
    • Integration: Three separate systems with no data sync (CRM, email, call logs)
    • Reporting: Hand-assembled spreadsheets for weekly management review

    Time spent: 30 hours per week on data plumbing. 10 hours on actual admin work.

    The Solution

    I built a pipeline:

    CSV Upload → Validation → Dedupe → Enrichment → CRM Sync → Auto-Reporting

    Step 1: Automated Validation

    ```python
    import re
    from typing import Dict, List, Tuple

    def validate_lead(lead: Dict) -> Tuple[bool, List[str]]:
        errors = []

        # Email format
        if not re.match(r'^[\w\.-]+@[\w\.-]+\.\w+$', lead['email']):
            errors.append("Invalid email format")

        # Phone format (US)
        if lead['phone'] and not re.match(r'^\+?1?\d{10}$', lead['phone']):
            errors.append("Invalid phone format")

        # Required fields
        for field in ['first_name', 'last_name', 'email']:
            if not lead.get(field, '').strip():
                errors.append(f"Missing {field}")

        return len(errors) == 0, errors
    ```

    Old way: Open each spreadsheet, manually check. Takes 2 hours for 500 leads.

    New way: Script validates 500 leads in 30 seconds. Generates error report.

    Step 2: Deduplication

    Tricky because duplicates aren’t exact. Someone might appear as:
    – “John Smith” vs “Jon Smith”
    – the same email address with different capitalization or formatting
    – “John Smith, Acme Inc” vs “John Smith, Acme”

    I used fuzzy matching (Python’s difflib `SequenceMatcher`, a Ratcliff/Obershelp-style similarity ratio; Levenshtein distance is a common alternative):

    ```python
    from difflib import SequenceMatcher
    from typing import Dict, List, Tuple

    def find_duplicates(leads: List[Dict], threshold=0.85) -> List[Tuple[int, int]]:
        duplicates = []

        for i, lead1 in enumerate(leads):
            for j, lead2 in enumerate(leads[i+1:], i+1):
                score = SequenceMatcher(
                    None,
                    lead1['email'].lower(),
                    lead2['email'].lower()
                ).ratio()

                if score > threshold:
                    duplicates.append((i, j))

        return duplicates
    ```

    Result: 12% of the leads turned out to be duplicates. Saved hours of manual review.

    Step 3: Data Enrichment

    Missing company info? Use the email domain:

    ```python
    from typing import Dict

    def enrich_lead(lead: Dict) -> Dict:
        if not lead.get('company'):
            email = lead['email']
            domain = email.split('@')[1]
            lead['company'] = domain.split('.')[0].title()

        return lead
    ```

    Any address at techcorp.com → auto-populate company: “Techcorp”

    Not perfect, but better than blank. Reduced manual enrichment by 70%.

    Step 4: CRM Integration

    Instead of manual entry into the CRM, script does it:

    ```python
    from datetime import datetime
    from typing import Dict, List

    # `crm` is the authenticated CRM client (e.g., a simple-salesforce session)

    def sync_to_crm(validated_leads: List[Dict]):
        for lead in validated_leads:
            crm.Lead.create(
                first_name=lead['first_name'],
                last_name=lead['last_name'],
                email=lead['email'],
                company=lead['company'],
                source='csv_import',
                imported_at=datetime.now()
            )
    ```

    Before: 2 hours per batch, manual entry in CRM UI, error-prone
    After: 5 minutes, automated, audited

    Step 5: Automated Reporting

    Instead of assembling spreadsheets every Friday:

    ```python
    def generate_weekly_report():
        leads_imported = get_import_count(last_7_days)
        duplicates_caught = get_duplicate_count(last_7_days)
        enrichment_rate = get_enrichment_rate()

        report = f"""
        Weekly Data Report
        ---
        Leads imported: {leads_imported}
        Duplicates prevented: {duplicates_caught}
        Enrichment rate: {enrichment_rate}%
        """

        email_report(report)  # wraps smtplib; helper omitted
    ```

    Before: 1 hour collecting and formatting data
    After: Automatic email every Friday at 9am

    The Numbers

    Time investment: ~60 hours building the pipeline

    Time saved (first year):
    – Lead validation: 4 hours/week → 30 min/week (3.5 hr/week saved)
    – Dedup: 3 hours/week → 30 min/week (2.5 hr/week saved)
    – Enrichment: 2 hours/week → 20 min/week (1.7 hr/week saved)
    – CRM entry: 5 hours/week → 30 min/week (4.5 hr/week saved)
    – Reporting: 1 hour/week → 5 min/week (0.9 hr/week saved)

    Total: ~13.1 hours/week saved

    Over a year: 13.1 × 52 ≈ 680 hours saved

    ROI: 680 hours saved / 60 hours invested ≈ 11x payback

    Plus, the system gets better over time. Fuzzy match thresholds improve. New enrichment rules get added. Each improvement compounds.

    The Lessons

    1. Automation ROI is Massive When Repetitive Work is Manual

    If a process happens weekly and takes >30 minutes, it’s worth automating. The break-even is usually 6-12 weeks.
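    That rule of thumb is just division:

    ```python
    def breakeven_weeks(build_hours: float, hours_saved_per_week: float) -> float:
        # Weeks until the build time pays for itself.
        return build_hours / hours_saved_per_week

    # A 30-minute weekly task, automated in about 5 hours of scripting:
    print(breakeven_weeks(5, 0.5))  # → 10.0 weeks
    ```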

    2. Data Quality is Free, Not a Feature

    By building validation into the pipeline, data quality improved automatically. The CRM now has cleaner leads than when I was manually entering them (because I was tired and made typos).

    3. Start Small, Iterate

    I didn’t build the perfect pipeline day 1. Started with CSV validation. Added dedup when that worked. Added enrichment when I had bandwidth. Automated reporting last.

    Each step was independently valuable. If I’d tried to build everything at once, I’d have shipped nothing.

    4. Don’t Optimize Prematurely

    My enrichment is naive (split the email domain, title-case it). It’s not AI and it’s not perfect. But it’s ~70% accurate for near-zero effort. Perfect is the enemy of shipped.

    5. Keep Humans in the Loop

    The pipeline generates suggestions (potential duplicates, enrichment guesses). A human still reviews high-risk items. Automation isn’t about removing humans—it’s about removing drudgery.

    6. Document and Maintain

    This pipeline requires maintenance. New CRM fields need new mappings. Enrichment rules need tweaking. Without documentation, the next person maintaining it would be lost.

    I spent 2 hours writing clear docs. Paid for itself immediately when I had to debug 6 months later.

    The Unexpected Benefit

    By removing manual work, I had time to think about the underlying process. Some questions emerged:

    • “Why do we import leads so frequently?”
    • “Why aren’t we feeding call logs back into the CRM?”
    • “Why isn’t qualification happening at import time?”

    These questions led to bigger process improvements. Automation isn’t just about efficiency—it’s about visibility into what’s actually happening.

    Lessons for Other People in Admin/Operations

    1. Learn Python or scripting. You don’t need to be a software engineer. You need to be dangerous enough to glue systems together.
    2. Start with your own job. You know the pain points intimately. Automating your own work is easier than automating someone else’s.
    3. Think in terms of pipelines. Data flows through your system. Each step is an opportunity to validate, enrich, or transform.
    4. Measure everything. If you can’t prove automation saved time, you’ll never justify it to management.
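    Thinking in pipelines can be taken literally: one small function per step, composed. A sketch with illustrative steps mirroring the flow above:

    ```python
    from functools import reduce

    def pipeline(*steps):
        """Compose per-record steps; each takes and returns a lead dict."""
        return lambda lead: reduce(lambda acc, step: step(acc), steps, lead)

    def normalize(lead):
        return {**lead, "email": lead["email"].strip().lower()}

    def enrich(lead):
        domain = lead["email"].split("@")[1]
        return {**lead, "company": lead.get("company") or domain.split(".")[0].title()}

    process = pipeline(normalize, enrich)
    print(process({"email": "  Ana@TechCorp.com "}))
    # → {'email': 'ana@techcorp.com', 'company': 'Techcorp'}
    ```

    Each new step (dedup check, CRM sync) slots in as one more function, which is why the pipeline could grow incrementally.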

    What’s Next?

    With 60% of my job automated, I had capacity for:
    – Deeper analysis (why are some leads converting better?)
    – Process improvement (can we shorten the sales cycle?)
    – Tool building (what other tools would salespeople pay for?)

    This is the real benefit of automation. Not “do less work.” But “do more interesting work.”