When Responsibility and Power Collide: Lessons from the RubyGems Crisis

The Ruby community experienced significant turbulence in September 2025 when Ruby Central forcibly took control of the RubyGems GitHub organization, removing long-standing maintainers without warning. As someone who has worked extensively on RubyGems security - first independently and later with Mend.io - protecting our ecosystem from supply chain attacks and handling vulnerability reports, I found myself caught between understanding the business necessities and being deeply disappointed by the execution.

I should clarify: I'm not affiliated with Ruby Central, but I've been working behind the scenes to keep RubyGems secure for years. Most people don't realize the constant vigilance required, including assessing security reports, investigating suspicious packages, and coordinating responses to threats. The RubyGems blog has documented some of these efforts, but much of this work happens quietly, every single day.

The Supply Chain Security Context

Recent events in the software world have once again made supply chain security impossible to ignore. We've seen attacks on npm, PyPI, and other package registries that compromised thousands of systems. We've also seen attacks on RubyGems. These aren't theoretical risks - they're active, ongoing threats that require constant attention.

Having personally taken over a few gems, I understand the complexity involved. These transfers required months of legal documentation, clear agreements, and, most importantly, communication and consent from all parties. What seems like bureaucratic overhead is essential risk management when dealing with infrastructure that thousands of companies depend on.

Yet while security is critical, it cannot become a blanket justification for rushed decisions and broken processes. True security requires not just control, but also the trust and cooperation of those who understand the systems best - and that trust, once shattered through poor execution, is far harder to rebuild than any technical vulnerability is to patch.

WHY vs HOW: The Critical Distinction

The WHY behind Ruby Central's actions - securing critical infrastructure, establishing clear legal frameworks, and protecting against supply chain attacks and legal risks - addresses real concerns, though questions remain about whether these fully explain the specific decisions made. As Ruby Central stated, they have a "fiduciary duty to safeguard the supply chain." When enterprises require SBOMs (Software Bill of Materials), when security audits demand transparent ownership chains, when legal liability is on the line, having unregulated access to production systems creates genuine risk.

The HOW - removing access without warning, failing to communicate, breaking trust with maintainers who had served for years - was catastrophic. As Ellen Dash documented and André Arko confirmed, maintainers learned about their removal through GitHub notifications, not communication from Ruby Central. The same objectives could have been achieved with proper planning, effective communication, open discussion, and respect for the individuals who had dedicated their lives to this ecosystem.

The Missing Human Element

One of the biggest failures has been Ruby Central's absence from the day-to-day community. Apart from Marty Haught, I rarely interact with Ruby Central leadership. They're not in the trenches with us (in the areas where I operate), they don't participate in the daily work, and they don't build relationships with maintainers. This disconnect created a situation where crucial decisions were made by people who didn't truly understand the human cost.

It's crucial to understand that the RubyGems GitHub organization contains far more than just the repositories Ruby Central funds or operates. While Ruby Central is responsible for RubyGems.org (the service) and funds work on core projects like Bundler and the RubyGems library, the organization also houses numerous other repositories - both public and private - that have no direct relationship with Ruby Central. By seizing control of the entire GitHub organization, Ruby Central took possession of projects that may have been beyond their legal or ethical purview - a concerning overreach that warrants scrutiny.

When you remove half of the on-call team members without warning, you're not improving security - you're creating operational risk. When you alienate the people who know the system inside and out, you're not protecting the ecosystem - you're endangering it.

What breaks my heart is seeing talented and dedicated contributors walk away. The domain knowledge these maintainers possess took years to build. The collaborative culture, the shared understanding, the trust between team members - these intangible assets are now damaged or lost. You can't just hire new engineers and expect the same level of expertise and dedication overnight.

Governance vs Control: Finding the Balance

The fundamental tension remains: who should control critical services that entire ecosystems depend on? After dealing with similar transitions myself, I've learned that governance and control don't have to be in opposition. A Ruby Central board member's perspective revealed the pressures they faced, including potential loss of funding, but also acknowledged that execution was poor.

A more surgical approach would have been to transfer only the critical repositories - RubyGems.org, perhaps the core RubyGems and Bundler repos - to Ruby Central's direct control while leaving the broader organization structure intact. This would have addressed their stated security concerns without overreaching into unrelated projects. The fate of the dozens of other repositories should have been discussed openly with the community, not decided unilaterally under time pressure.

  • Governance is about direction, goals, and community involvement in decision-making
  • Control is about legal and operational boundaries - who bears responsibility when things go wrong

An organization can maintain control for legal reasons while still having transparent, community-driven governance. But this requires:

  1. Clear agreements established in advance
  2. Transparent communication throughout the process
  3. Respect for existing contributors
  4. Understanding that trust is earned, not demanded

Ruby Central had understandable concerns about security and liability. But their execution turned a necessary evolution into a crisis. The same changes, implemented over weeks or months with proper communication and respect for maintainers, might have been accepted as unfortunate but necessary.

Moving Forward: Uncomfortable Truths

We need to acknowledge several realities:

  1. Critical infrastructure needs formal governance - The era of informal arrangements for mission-critical services is ending. This transition must be handled with care.

  2. Legal responsibility requires appropriate control - If Ruby Central faces lawsuits or liability, it needs the ability to manage that risk. This is non-negotiable in today's threat landscape.

  3. Security theater isn't security - Real security comes from experienced teams with deep system knowledge, not from corporate control structures.

  4. Community contribution and corporate control can coexist - But only with clear agreements, transparent processes, and mutual respect.

  5. Ruby Central needs to be present - Leadership must engage with the community, understand the daily work, and build relationships with contributors.

  6. Decisions made under extreme time pressure are rarely optimal - Critical infrastructure changes need careful planning, not panic-driven actions. If there truly was a 24-hour deadline - whether from external pressure or internal mismanagement - it reveals systemic governance problems that enabled this crisis.

My Path Forward: Why I'm Staying

Despite everything that has transpired, I've decided to continue my work with RubyGems. I'll continue to do what I've been doing for years: hunting for malicious and spam packages, assessing security reports, and developing new ways to protect our community.

It would be hypocritical of me to abandon ship now. Throughout this article, I've argued that those who bear responsibility should maintain control. I've emphasized that real security comes from people with deep system knowledge, not from organizational structures. How could I make these arguments and then walk away from the very work I claim is so critical?

The Ruby community deserves continuity and stability, especially during this turbulent period. The malicious actors trying to compromise our supply chain won't pause their attacks because of organizational drama. This isn't about endorsing how things were handled - I've been clear about my disappointment. It's about recognizing that the Ruby ecosystem is bigger than any individual or organization.

A Personal Reflection

As someone who deals with enterprise Ruby software and security requirements daily, I understand the types of pressures Ruby Central claims to have faced. Supply chain attacks are real. Legal liabilities are real. The need for formal structures is real.

But as someone who has worked to protect RubyGems from these very threats, I know that security comes from people, not policies. It comes from maintainers who care enough to respond at midnight, from contributors who spot anomalies because they know the system intimately, from a community that watches out for each other.

The Ruby community has lost more than just access permissions. We've lost people who cared deeply, who worked tirelessly - often without recognition or compensation - to keep our ecosystem secure. While I'll continue my security work because I believe in protecting our community, I mourn the loss of colleagues who deserved better.

The question isn't whether critical infrastructure needs proper governance - it clearly does. The question is whether we can implement these necessary changes while preserving the human relationships and domain expertise that actually keep our systems secure. More importantly, we must ask whether the current governance structure adequately protects against undue influence from any single external source. Based on recent events, we have a lot of work ahead of us to build a system that is both secure and truly independent.


For more context on these events, see: Ellen Dash's account, André Arko's farewell, Ruby Central's official statement, and a board member's perspective

WaterDrop Meets Ruby’s Async Ecosystem: Lightweight Concurrency Done Right

Ruby developers have faced an uncomfortable truth for years: when you need to talk to external systems like Kafka, you're going to block. Sure, you could reach for heavyweight solutions like EventMachine, Celluloid, or spawn additional threads, but each comes with its own complexity tax.

EventMachine forces you into callback hell. Threading introduces race conditions and memory overhead. Meanwhile, other ecosystems had elegant solutions: Go's goroutines, Node.js's event loops, and Python's asyncio.

Ruby felt clunky for high-performance I/O-bound applications.

Enter the Async Gem

Samuel Williams' async gem brought something revolutionary to Ruby: lightweight concurrency that actually feels like Ruby. No callbacks. No complex threading primitives. Just fibers.

require 'async'

Async do |task|
  # These run concurrently
  task1 = task.async { fetch_user_data }
  task2 = task.async { fetch_order_data }
  task3 = task.async { fetch_metrics_data }

  [task1, task2, task3].each(&:wait)
end

The genius is in the underlying architecture. When an I/O operation would normally block, the fiber automatically yields control to other fibers – no manual coordination is required.
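
To make that hand-off concrete, here is a minimal sketch using only Ruby's built-in Fiber class. The async gem performs this scheduling automatically at real I/O boundaries; the manual resume calls and the "a"/"b" worker names below are purely illustrative.

```ruby
# Cooperative scheduling by hand with plain Fibers (stdlib only).
# Each fiber runs until it yields, then control passes to the next one -
# exactly what a fiber scheduler does for you at every blocking I/O call.
log = []

worker = lambda do |name|
  Fiber.new do
    log << "#{name}:start"
    Fiber.yield            # simulate a point where I/O would block
    log << "#{name}:resume"
  end
end

fibers = [worker.call("a"), worker.call("b")]
fibers.each(&:resume) # run each fiber until its first yield
fibers.each(&:resume) # resume each past the "I/O" point

log
# => ["a:start", "b:start", "a:resume", "b:resume"]
```

Note how both fibers reach their "I/O" point before either finishes - the interleaving is what lets other work proceed while one task waits.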

Why Lightweight Concurrency Matters

Traditional threading and evented architectures are heavy. Threads consume significant memory (commonly around 1MB of reserved stack each) and come with complex synchronization requirements. Event loops force you to restructure your entire programming model.

Fibers are lightweight:

  • Memory efficient: Kilobytes instead of megabytes
  • No synchronization complexity: Cooperative scheduling
  • Familiar programming model: Looks like regular Ruby code
  • Automatic yielding: Runtime handles I/O coordination
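
The memory claim is easy to sanity-check with core Ruby alone: spinning up ten thousand fibers and parking each at a yield point is quick, because a fiber's stack starts small and grows on demand (exact sizes vary by Ruby version and platform).

```ruby
# Stdlib-only sketch: create 10,000 fibers and park each at a yield point.
# Doing the same with Thread.new would reserve roughly a megabyte of stack
# per thread; fibers start with a small stack that grows as needed.
fibers = Array.new(10_000) { Fiber.new { Fiber.yield } }
fibers.each(&:resume) # run each fiber up to its Fiber.yield

fibers.all?(&:alive?) # every fiber is parked, none has finished
```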

WaterDrop: Built for Async

Starting with the 2.8.7 release, every #produce_sync and #produce_many_sync operation in WaterDrop automatically yields during Kafka I/O. You don't configure it. It just works:

require 'async'
require 'waterdrop'

producer = WaterDrop::Producer.new do |config|
  config.kafka = { 'bootstrap.servers': 'localhost:9092' }
end

Async do |task|
  # These run truly concurrently
  user_events = task.async do
    100.times do |i|
      producer.produce_sync(
        topic: 'user_events',
        payload: { user_id: i, action: 'login' }.to_json
      )
    end
  end

  # This also runs concurrently during Kafka I/O
  metrics_task = task.async do
    collect_application_metrics
  end

  [user_events, metrics_task].each(&:wait)
end

Real Performance Impact

Performance Note: These benchmarks show single-message synchronous production (produce_sync) for clarity. WaterDrop also supports batch production (produce_many_sync), async dispatching (produce_async), and promise-based workflows. When combined with fibers, these methods can achieve much higher throughput than shown here.

I benchmarked a Rails application processing 10,000 Kafka messages across various concurrency patterns:

Sequential processing (baseline):

  • Total time: 62.7 seconds
  • Throughput: 160 messages/second
  • Memory overhead: Baseline

Single fiber (no concurrency):

  • Total time: 63.2 seconds
  • Throughput: 158 messages/second
  • Improvement: 0.99x - No benefit without actual concurrency

Real-world scenario (3 concurrent event streams):

  • Total time: 23.8 seconds
  • Throughput: 420 messages/second
  • Improvement: 2.6x - What most applications will see in production

Optimized fiber concurrency (controlled batching):

  • Total time: 12.6 seconds
  • Throughput: 796 messages/second
  • Improvement: 5.0x - Peak performance with proper structure

Multiple producers (traditional parallelism):

  • Total time: 15.2 seconds
  • Throughput: 659 messages/second
  • Improvement: 4.1x - Good, but uses more memory than fibers

A single producer using fibers outperforms multiple producer instances (5.0x vs 4.1x) while using less memory and resources. This isn't about making individual operations faster - it's about enabling Ruby to handle concurrent I/O elegantly and efficiently.

Transparent Integration

What makes WaterDrop's async integration cool is that it's completely transparent:

# This code works with or without async
producer.produce_sync(
  topic: 'events',
  payload: data.to_json
)

Running in a fiber scheduler? It yields during I/O. Running traditionally? It blocks normally. No configuration. No special methods.
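
Under the hood, this transparency hinges on Ruby's fiber scheduler hook: blocking operations consult Fiber.scheduler and yield to it when one is installed, falling back to plain blocking otherwise. A library can observe the same signal - this detection sketch is illustrative, not WaterDrop's actual implementation:

```ruby
# Illustrative only: Ruby 3.x exposes the active scheduler (if any) for the
# current thread via Fiber.scheduler. Inside an Async reactor it returns the
# scheduler object; in plain code it returns nil and I/O blocks as usual.
def concurrency_mode
  Fiber.scheduler ? :nonblocking : :blocking
end

concurrency_mode # => :blocking when called outside any fiber scheduler
```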

The Transactional Reality

Transactions have limitations. Multiple transactions from one producer remain sequential due to the transactional.id design:

# These transactions will block each other
Async do |task|
  task.async { producer.transaction { ... } }
  task.async { producer.transaction { ... } } # Waits for first
end

But transactions still yield during I/O, allowing other fibers doing different work to continue. For concurrent transactions, use separate producers.

Real-World Example

class EventProcessor
  def initialize(producer)
    @producer = producer
  end

  def process_user_activity(sessions)
    Async do |task|
      # Process different types concurrently
      login_task = task.async { process_logins(sessions) }
      activity_task = task.async { process_activity(sessions) }

      # Analytics runs during Kafka I/O
      analytics_task = task.async { update_analytics(sessions) }

      [login_task, activity_task, analytics_task].each(&:wait)
    end
  end

  private

  attr_reader :producer

  def process_logins(sessions)
    sessions.each do |session|
      producer.produce_sync(
        topic: 'user_logins',
        payload: session.to_json
      )
    end
  end
end

Why This Matters

WaterDrop's async integration proves Ruby can compete in high-performance I/O scenarios without sacrificing elegance. Combined with Samuel's broader ecosystem (async-http, async-postgres, falcon), you get a complete stack for building high-performance Ruby applications.

Try wrapping any I/O-heavy operations in Async do |task| blocks. Whether it's API calls, database queries, or Kafka operations with WaterDrop, the performance improvement may be immediate and dramatic.


Find WaterDrop on GitHub and explore the async ecosystem that's making Ruby fast again.

Copyright © 2025 Closer to Code
