
Being proactive about potential security issues #2101

Closed
ioquatix opened this issue Nov 27, 2018 · 55 comments

@ioquatix

I know this is probably a very tricky issue, but watching dominictarr/event-stream#116 unfold makes me realise how damaging such an issue can be.

I know from experience that people who no longer maintain gems are often willing to give up ownership.

I wonder if there is a way to minimise the impact of these issues.

For example:

  • Incorporating web-of-trust ideas into RubyGems.
  • Validated organisations (e.g. I trust all gems in the Amazon/AWS organisations).
  • Stable/trusted packages: A Gemfile could specify a certain level of "trustworthiness" required, and gems could receive vetting from the community.
  • Better handling of gems which are no longer maintained - e.g. if a new author is added, the gem is immediately quarantined (so that bundle update won't use it by default).
  • Automatic detection of suspicious code.
  • Allow gems to be archived or otherwise marked as inactive.
  • Perhaps mark gems older than 5 years with no activity as archived/inactive by default, and don't show them in search results by default. After some time, consider removing these gems entirely to free up the namespace, ensuring that existing users of such gems (if any) get an explicit warning about the change.
    • N.B. this already occurs, but manually. Several times I've asked users to give up valuable "names", had a favourable response, yanked all old versions, and released new code. I believe it would be prudent for users of the "old" gem to receive some kind of warning/error - although in many cases this number is going to be 0 or very close to it, excepting malicious behaviour.
  • Define a set of rules for "secure" dependency versions, e.g. only allow explicit versions ("= x.y.z") or pessimistic constraints ("~> x.y"). Require a minimum Ruby version to be specified. Identify and design characteristics which limit security issues and use them as a minimum watermark for "trustworthiness" (see the sketch after this list).
  • Consider whether it's possible to incorporate static security checking tools into trustworthiness evaluation. There are tools for native executables which can detect a wide range of issues; additionally, code-level analysis (e.g. checking for eval) might help, although Ruby makes this pretty difficult. Maybe Ruby itself needs to provide more security at the interpreter level.
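
To make the dependency-rules idea above concrete, here is a minimal sketch of a gemspec that would satisfy such a policy. The gem names and versions are purely illustrative, not a proposal for any real gem:

```ruby
# Illustrative gemspec only; names and versions are made up.
Gem::Specification.new do |spec|
  spec.name    = "example"
  spec.version = "1.0.0"
  spec.summary = "Example gem satisfying a stricter dependency policy"
  spec.authors = ["Example Author"]
  spec.files   = Dir["lib/**/*.rb"]

  # A minimum Ruby version is declared explicitly.
  spec.required_ruby_version = ">= 2.4"

  # Dependencies use only exact ("= x.y.z") or pessimistic ("~> x.y") constraints.
  spec.add_dependency "rack", "~> 2.0"
  spec.add_dependency "json", "= 2.2.0"
end
```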

I know this is an impossible problem to solve completely, but it would probably make sense to raise the bar as high as possible by default, so that people who explicitly opt out are knowingly accepting those risks.

One aspect of this which I think could be developed further is the idea of commercial organisations. Normally open source code has no warranty, but some authors might like to say "I guarantee my organisation and the following dependencies/versions are safe/have been checked", and that could be a paid service or managed in a more commercial way in order to facilitate the effort required for checking/vetting packages. You could basically add this trust to your project (i.e. you pay for the secure token/key, add it to your project, and then bundle update will only update things that are explicitly trusted).

@ioquatix
Author

I've created a similar issue here: https://bugs.ruby-lang.org/issues/15344

@colby-swandale
Member

These ideas sound good, but we have to remember that RubyGems.org and RubyGems are almost entirely community driven. Finding developers who are willing to not only build but also maintain these features is hard.

The maintainer team for RubyGems & RubyGems.org has been closely following the recent issues with NPM and has implemented changes where necessary. But at the end of the day, no matter how secure we make it, you're still downloading arbitrary code.

@ioquatix
Author

@colby-swandale Thanks for your feedback. I agree and understand all your points.

But at the end of the day and no matter how secure we make it, you're still downloading arbitrary code.

I agree with this, but I also think we can improve the way we trust arbitrary code, e.g. code signing, sandboxing, static analysis, etc. They are all valid techniques that increase the cost of hacking.

One suggestion by @mame was to have a vetted gem repository and a development gem repository.

Even if there are too many ideas, if we find some simple ones with significant value, we can find developers to implement them, or at least have a roadmap of the things that would be acceptable PRs.

@ioquatix
Author

An example of the trusted/untrusted split for package management can be seen in Arch Linux. They have a core repository of packages, and the "AUR", or Arch User Repository. If a package in the AUR gets enough votes, it can be moved to core. Installing packages from the AUR is more involved and comes with more warnings. Packages in core are signed by a web of trust.

@colby-swandale
Member

To be clear, we absolutely should be looking into improving security in the gem ecosystem but we will be limited to changes that we can realistically work towards. Either contributions from the community or sponsored work from RubyTogether, corporations etc.

@ioquatix
Author

That makes total sense, and I think the point of this issue is to identify and make some concrete proposals which then could be funded and/or implemented.

@colby-swandale
Member

Yes! If we can write some proposals for improving the gem ecosystem - that would be a great first step!

@djberg96

djberg96 commented Nov 27, 2018

Just some initial thoughts...

Thought I'd mention this: https://guides.rubygems.org/security/

While this obviously doesn't solve all the problems, one thing your organization can do is to set a default trust policy, e.g. -P HighSecurity to ensure all of the gems you install are verified and signed.

A few issues with that: 1) I don't think most people sign their gems; 2) until fairly recently RubyGems certs defaulted to 1 year (though in recent versions you can now configure that); and 3) what do you do if the cert expires and the author is AWOL and/or has died?

Some other things I'm not so sure about, e.g. what to do about code that hasn't been updated in X number of years. Some gems are just stable. Perhaps we could combine the gem cert expiration with this idea somehow, e.g. make the maximum gem cert lifetime 5 years, so the author de facto has to update it, if only just the cert, to show that he/she is still around.

@halostatue

I’m mostly going to agree with what @djberg96 said, but I also used to sign my gems.

I found that the implementation as it stands is far more trouble than it’s worth, especially since most gems aren’t signed. It makes maintenance for authors harder:

  1. You have to have a place to publish your public certs so that people can add them as trusted certs.
  2. You can only have one cert per email address and one email address per cert, which makes it difficult to have a community-maintained signed gem (should the community trust Rails if it is signed by a different person for each release because of rotating release maintainer responsibilities?).
  3. Secure signed release automation is not possible with the tooling as it stands. (I want to figure out how to automate a regular release of mime-types-data, for example. Unless I upload my single signed cert to an automated server that I may not trust access controls on…I can’t do that.)
  4. If you wish to permit gem 'mygem', '~> 1.0', github: 'mygem/mygem' and mygem is signed, you must have two different versions of your mygem.gemspec because Bundler looks for the signing key if the signing key options are in the gemspec. Tools like Hoe help with this immensely as you can use a plug-in like hoe-gemspec2 to generate two different versions of your .gemspec (although it does not allow you to give them two different names). That gem, BTW, is now four years old because it does its job perfectly and has not needed any maintenance since the most recent version was released (and that version added the ability to strip the signing information for Bundler use). A sketch of an alternative workaround follows at the end of this comment.

Yes, we need something, and I don’t know what it is. I am going to be actively looking for co-maintainers for almost all of my gems next year because I have no time to maintain any of them—and I’m going to be looking to hand them completely over after trust has been established and active maintenance has been shown (that I can’t give).
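
For illustration only, a rough sketch of an alternative to the two-gemspec workaround described in point 4: guard the signing fields so they are only attached when the private key is actually present on disk. All names and paths here are hypothetical, not taken from any real gem:

```ruby
# Hypothetical sketch; names and paths are illustrative.
Gem::Specification.new do |spec|
  spec.name    = "mygem"
  spec.version = "1.0.0"
  spec.summary = "Illustrative signed gem"
  spec.authors = ["Example Author"]
  spec.files   = Dir["lib/**/*.rb"]

  # Only attach signing data when the private key exists locally, so that
  # installing from a git source (where the key is rightly absent) still works.
  private_key = File.expand_path("~/.gem/mygem-private_key.pem")
  if File.exist?(private_key)
    spec.signing_key = private_key
    spec.cert_chain  = ["certs/mygem.pem"]
  end
end
```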

@JonRowe

JonRowe commented Nov 30, 2018

Does RubyGems validate the contents of gems against their source? IMO this is a step we could take without changing the end-use model, and this transparency would have aided in detecting an attack like the event-stream one.

@halostatue

It does not: the repositories may not be readily accessible (there's a well-known Ruby developer who uses, or used, P4 as their SCM of choice; source verification would mean greatly expanding the RubyGems surface to include SCM code for a lot of different systems). The gem is self-contained, and for offline installation must essentially remain that way. This also would not have helped in the event that opened this discussion, as the source matched the release version for a brief time.

Signatures are really the only thing that will help here, but that has two real problems:

  1. The tooling is awful (this is not meant as blame; getting security tooling right is hard and requires dedicated resources, as happened with both deb and rpm) and high-security mode is not enabled by default; and
  2. There must be a split between blessed packages/packagers (like the Debian release teams) and contributed or third-party packages. When you install nginx or passenger or Erlang on a Debian system, first you install their PGP keys because they’re not trusted by default.

That process would be very hard on the Ruby community and consumers of RubyGems, because I believe that if you're using HighSecurity mode, you cannot install gems which aren't signed - which is the vast majority of them. It's also not clear to me that Bundler would easily support this.

Enabling cross-signing of gems (even unsigned ones) could help, as that would allow a company security team to vet a version of a gem before allowing it on a private gem server—but who will pay for all of that infrastructure and people time for the tens of thousands of gems—especially those old but stable ones (five years without an update may represent abandonment, or it may represent stability)—that have been released?

@ioquatix
Author

Even if we can't validate from source, maybe it's not a silly option to have in some specific cases. e.g. 100% of my gems are backed by git and I'd see that as a useful feature. People who use other source code repos would either miss out or need to provide an appropriate PR to RubyGems.

@ioquatix
Author

ioquatix commented Jan 21, 2019

Just touching base again, w.r.t. another issue which popped up recently: https://mobile.twitter.com/pear/status/1086634389465956352

PEAR server is down

A security breach has been found on the http://pear.php.net webserver, with a tainted go-pear.phar discovered. The PEAR website itself has been disabled until a known clean site can be rebuilt. A more detailed announcement will be on the PEAR Blog once it's back online.

If you have downloaded this go-pear.phar in the past six months, you should get a new copy of the same release version from GitHub (pear/pearweb_phars) and compare file hashes. If different, you may have the infected file.

@ioquatix
Author

Another feature I recently encountered when using npm to release updates:

[screenshot: npm's release notification email]

It is useful to notify the author of a release IMHO.

@dgollahon

dgollahon commented Apr 5, 2019

First off, I just want to say that I love seeing this stuff discussed. :)

I was listening to this podcast discussing the event-stream incident recently and at 16:45 one of the speakers discusses how, by default, npm will run pre and post install scripts of any of the hundreds or thousands of packages you are installing. I also ran across this little illustrative example. There's apparently a way you can globally enable/disable this by default and manually run certain scripts but the ergonomics don't sound great and I doubt it is widely known/used.

I suspect that the gemspec's extensions mechanism for compiling ruby extensions could work basically the same way as the npm post-install script. Granted, untrusted code will still be run when the library is required, but it's probably worth noting the alternate vector for unexpected/untrusted code especially given all the environments gems get installed to, though I don't know the practical threat level relative to other issues already being discussed.

I don't know if there's a user-friendly way of opting in and out of running certain extension scripts (and this might be more of a Bundler problem), but it could also be helpful to mark gems with C extensions in some way and whitelist the ones that have extensions/build scripts that I want to be allowed to run. I probably can't get around a nokogiri dependency (and the CVEs it seasonally carries with it) for a lot of apps, but I might want to know that some small, less commonly used dependency is a C extension and is plausibly more likely to be exploitable (or buggy).
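
As a purely hypothetical illustration of the install-time vector described above (the gem and file names are invented): anything listed in a gemspec's `extensions` is executed by RubyGems during installation, before the library is ever required.

```ruby
# Hypothetical example; the gem and file names are made up for illustration.
Gem::Specification.new do |spec|
  spec.name    = "innocuous"
  spec.version = "0.1.0"
  spec.summary = "Gem whose extconf.rb runs at install time"
  spec.authors = ["Example Author"]
  spec.files   = ["lib/innocuous.rb", "ext/innocuous/extconf.rb"]

  # RubyGems runs each file listed here during `gem install`, with the
  # installing user's privileges, before any `require` happens.
  spec.extensions = ["ext/innocuous/extconf.rb"]
end

# ext/innocuous/extconf.rb could contain, alongside the usual mkmf calls,
# arbitrary Ruby that executes on every machine installing the gem:
#
#   require "mkmf"
#   # ... anything at all could run here ...
#   create_makefile("innocuous/innocuous")
```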

@ioquatix
Author

ioquatix commented Apr 5, 2019

I personally really like this post: https://hackernoon.com/im-harvesting-credit-card-numbers-and-passwords-from-your-site-here-s-how-9a8cb347c5b5

Once you are installing a gem with native code, or even just plain Ruby code, all bets are off. The only solution to this is some kind of sandboxing.

@dgollahon

Fantastic post, thanks for the link.

Yeah, probably so.

@djberg96

djberg96 commented Apr 6, 2019

@ioquatix Just reading through the article now, I saw this comment:

But I’m afraid it’s perfectly possible to ship one version of your code to GitHub and a different version to npm.

This particular approach could be challenged somewhat via manifest enforcement, could it not? Perl, for example, uses MANIFEST and MANIFEST.SKIP files that control what is and is not included in the package and what gets unpacked when it's installed. I'm not sure how strict it is if they don't match - it might be a command line switch - but it's something to consider.

https://www.youtube.com/watch?time_continue=38&v=QwjNHqcYig8

So, partially borrowing from the article's example, if you have a lib/bad.rb in your gem that you packaged locally without ever pushing it to GitHub and you tried to ship it without listing it in your manifest, RubyGems could either fail to package it in the first place, or warn the user upon installation that a file has been detected in the package that isn't listed in the manifest. While this obviously isn't foolproof, it at least diminishes the potential of that particular shenanigan.
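
A rough sketch of what such a check might look like, assuming a plain MANIFEST file listing one path per line. The file names and layout here are assumptions for illustration, not an existing RubyGems feature:

```ruby
# Hypothetical manifest check; not an existing RubyGems feature.
manifest = File.readlines("MANIFEST", chomp: true).reject(&:empty?).sort
shipped  = Dir.glob("{lib,bin,ext}/**/*").select { |path| File.file?(path) }.sort

unlisted = shipped - manifest   # packaged but never advertised (e.g. lib/bad.rb)
missing  = manifest - shipped   # advertised but absent from the package

abort "Not in MANIFEST: #{unlisted.join(', ')}" unless unlisted.empty?
abort "Missing from package: #{missing.join(', ')}" unless missing.empty?
puts "MANIFEST check passed"
```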

@ioquatix
Author

ioquatix commented Apr 6, 2019

I agree that making some kind of cryptographic checksum of file contents and comparing it to git might help, but not all gems have a git repo (or they did, but it's gone now), and in any case this doesn't really solve the problem of bad code being present, because malicious actors can probably find a way around it. The only way this really works is if you form all those parts into a web of trust using something like GPG.

Here is a tool I wrote years ago for this very purpose: https://github.com/ioquatix/fingerprint. However, it wouldn't work as well in a hostile environment - if you are a malicious actor, nothing is stopping you from fingerprinting bad code and distributing it.

@djberg96

djberg96 commented Apr 7, 2019

Right, you couldn't do a GitHub comparison since a gem author might not be using it, so you would have to mandate a special file created up front. You're right that it wouldn't stop the presence of bad code, but it would at least stop you from shipping files that weren't advertised.

@ioquatix
Author

ioquatix commented Apr 7, 2019

In any case, I agree there is value in making bad behaviour difficult, but it won't stop malicious actors, they'll just work around it.

@ioquatix
Author

ioquatix commented Jul 8, 2019

Well, it happened again: https://news.ycombinator.com/item?id=20377136 -> https://withatwist.dev/strong-password-rubygem-hijacked.html

:(

@simi
Member

simi commented Jul 8, 2019

@ioquatix is there any post-mortem on how the user account was hijacked? Weak password? 2FA not enabled?

@ioquatix
Author

ioquatix commented Jul 8, 2019

I don't know how it happened, but the reality is, even if someone's account is hijacked, there are some suggested mitigations, e.g. signed gems. Of course, we can't solve every problem, but we could make it much harder.

As a result I took a look at my own workflow and decided to sign all my git commits from now on.

@olivierlacan
Contributor

olivierlacan commented Jul 8, 2019

I agree with a lot of @ioquatix's ideas to improve things here. But to me the most straightforward vulnerability vector is gems that happen to be under low scrutiny (few releases) yet are somehow depended on by a large number of other gems, or are otherwise popular. So a mix of high reverse-dependency count or popularity with a low number of recent releases. Infrequent releases mean lower scrutiny, because a frequently released gem tends to have more sets of eyes checking it out.

I believe that any gem with a significant reverse dependency count ("infrastructure" gems) or popularity should immediately require MFA/2FA in order to be released. This is something we can enforce today in the RubyGems auth and gem push. It means that even if someone manages to guess/reuse a maintainer's password, they will hit a wall at the release phase.


I'm really curious if we could get numbers for how many of the popular gem maintainers have 2FA enabled for their account. I'd wager an extremely low number.

It of course leaves open the possibility of a malicious and obfuscated PR, but those are much harder to achieve for attackers.

@halostatue

Until I read the HN coverage, I didn’t know that RubyGems had 2FA. I’ve now enabled it on my accounts.

@jrochkind

jrochkind commented Jul 9, 2019

So one kind of attack we have seen recently (I think several times) is simply that someone's rubygems.org credentials have been compromised, and an attacker does a malicious release. The original (real) owner didn't even know it happened.

I think emails from rubygems.org on important security events could go a long way:

  • your password was changed on rubygems.org
  • your email address was changed on rubygems.org (email to old email address)
  • you were removed as an owner on gem X
  • a release of gem Y that you own was done, on this date, this version number

With emails like these, the real owner would at least have a fighting chance of realising a malicious release had been made when they see activity on their account that they weren't engaged in. This seems relatively easy to implement, and it would be a giant step forward over counting on some third party to notice suspicious code in an indirect dependency of a dependency of a dependency that had a patch release.
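
For illustration, a sketch of what one of these notifications might look like as a standard Rails mailer. This is not rubygems.org's actual code; the class, method, and attribute names are assumptions made for the example:

```ruby
# Hypothetical sketch only; not code from rubygems.org.
class SecurityEventMailer < ApplicationMailer
  # Sent to every owner of a gem whenever any account pushes a new version.
  def version_pushed(owner, gem_name, version, pushed_by)
    mail to: owner.email,
         subject: "#{gem_name} #{version} was pushed by #{pushed_by}",
         body: "If you did not expect this release, please contact support immediately.",
         content_type: "text/plain"
  end
end

# After a successful push, notify all owners (including the pusher):
# owners.each do |owner|
#   SecurityEventMailer.version_pushed(owner, gem_name, version, pusher_handle).deliver_later
# end
```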

@jrochkind

jrochkind commented Jul 9, 2019

Secondly, if there were some way to integrate the haveibeenpwned.com API, that would probably help a lot.

I think there is probably some relatively low-hanging fruit that would have major benefits, over trying to set up code-signing regimes (which require dealing with the difficult problem of how you know what keys are legit keys, when legit owners will need to change their keys from time to time, and you don't want to let a compromised account change registered/trusted keys).

To the extent people are really keen on working on code-signing regimes, I think it would be best to find examples of other languages/platforms where a code-signing regime has proven successful in actual practice, in a large ecosystem where projects often have dozens of dependencies. I think it's a difficult thing to get right, both in terms of security and in terms of UX so that anyone will actually use it (and the intersection of the two: UX that makes it hard to use the thing in ways other than intended and accidentally end up with less security than you thought). With the limited development resources available, I don't think innovative R&D into how to use code signing practically here is going to be as productive as the low-hanging fruit, or just doing the standard things that many have already done before, such as 'of security interest' emails or other ways to improve the security of rubygems.org accounts.

@indirect
Member

indirect commented Jul 9, 2019

@jrochkind we have been working on push emails (maybe they're even already merged?), but if you have time it would be great to go through the code and figure out which of those things still need to be done.

The catch of course is that if your account is compromised, the new owner simply changes your account's email address before doing anything suspicious.

As for code signing... as far as I can tell it would not have helped in any of these cases. Increasing adoption of 2FA, blocking typosquatting, and sending notification emails are the only things that seem like they would help the real-world issues we have seen, and we are working on all of them. 👍

@indirect
Member

indirect commented Jul 9, 2019

Oh and your haveibeenpwned.com comment reminded me about a gem I wrote that checks passwords against a bloom filter of the million most common passwords with fallback to HIBP: https://github.com/indirect/unpwn. Maybe someone is interested in sending a PR to integrate that?

We could also potentially turn it on and then reset everyone's password and encourage them to set up 2FA, if we want to be much more secure and also really, really annoy everyone.

@mame
Contributor

mame commented Aug 20, 2019

Note that 2FA enforcement is not a silver bullet. Even if we had enforced 2FA, the rest-client case would still have happened. The latest legitimate rest-client gem was released in 2017. (The last time the compromised account pushed a gem might be even earlier.) An attacker who compromises the account can simply set up 2FA themselves and then push a malicious gem.

@DanielHeath

Agree it's not a silver bullet - it's the bare minimum in 2019 (not to shit on the hard work of the volunteers who keep this stuff running at all - y'all owe me nothing, and I owe you a great deal).

@colby-swandale
Member

colby-swandale commented Aug 20, 2019

You also have to remember that by setting 2FA as a requirement, we will be breaking a lot of automated systems/processes. Some supported versions of ruby don't ship with a version of RubyGems that supports 2FA.

@colby-swandale
Member

I agree that setting 2FA as required is a good thing, but it's not something that we can just simply turn on.

@DanielHeath

Those automated systems should be using API keys, right?

We should require 2FA to get the API key rather than requiring it to use the API key.

@olivierlacan
Contributor

We can absolutely require 2FA for pushes of gems with large reverse-dependency graphs: disallow pushes when 2FA is not set up, return an explicit response to gem push, and email maintainers asking them to set up 2FA as "infrastructure gem maintainers".

@DanielHeath

So, before getting bogged down in details - as a really, really basic starting point - how about I:

  • Write a scope for https://github.com/rubygems/rubygems.org/ which finds owners of popular gems without 2FA enabled, and
  • Add a mailer that asks them to enable 2FA, and
  • Ask a maintainer to merge/deploy & send the email to those owners

@jrochkind

I still think an automated email to all gem owners on every gem push would go a LONG way while being easy to implement. "Version x.y.z of gemname was pushed by account name. [more stuff, if you think this was a compromise do X]." In this case, there WERE other gem owners who would have seen it and had a good chance of noticing "wait, that's not right."

Additionally, let's say I'm a gem owner collaborating with other people on an open source gem. Let's say I want to make sure that all gem owners have 2FA enabled. Right now I have no way to do that -- I have no way to even KNOW if all gem owners have 2FA enabled. Giving me some way to even check that myself manually would be a major improvement.

Thirdly, these rubygems.org accounts are getting compromised somehow, and it seems likely that it's from re-using a password used somewhere else. Using the free haveibeenpwned.com compromised-password API (it gives you ways to check for compromised passwords using a portion of a hash, without revealing a password or even its full hash) to prevent password changes to anything in the compromised list, and to bulk-check existing passwords and require a reset on vulnerable ones, would be a huge step.
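
For reference, the range API mentioned above works roughly like this; a minimal sketch against the public Pwned Passwords endpoint (everything besides the endpoint itself is illustrative):

```ruby
require "digest"
require "net/http"
require "uri"

# Minimal k-anonymity check against the public Pwned Passwords API.
# Only the first five characters of the SHA-1 hash ever leave the machine.
def pwned_count(password)
  sha1 = Digest::SHA1.hexdigest(password).upcase
  prefix, suffix = sha1[0, 5], sha1[5..-1]

  response = Net::HTTP.get(URI("https://api.pwnedpasswords.com/range/#{prefix}"))
  match = response.lines.find { |line| line.start_with?(suffix) }
  match ? match.split(":").last.to_i : 0
end

# A signup or password change could then reject (or at least warn about)
# any password where pwned_count(password) > 0.
```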

All of these things are relatively straightforward to do, and all of them would be major advances. I realize "relatively straightforward" still means significant development time to release something robust and reliable. When we discussed this here last, I was told:

If you want to submit PRs for any of your other suggestions, we would be happy to review them.

I could try to negotiate time from my boss to contribute this, or I could work my weekends and evenings on it instead of spending time with my family etc. What would motivate me to do these things is some transparency on where the Ruby Together development money for rubygems.org is going. If it's all going to rubygems.org security and y'all still need more help, that's motivating. If it's going somewhere else, I need an explanation of where and a justification of why this is a higher priority than rubygems.org security to feel motivated to donate my time to a project some are getting paid for. If it does not seem to me like rubygems.org security is being prioritized by whomever is allocating Ruby Together-funded development time, it is not motivating for me to donate my time.

@DanielHeath

Ruby Together funding is currently enough to support approximately one full-time developer. That's basically 'caretaker mode': keeping the lights on, dependency upgrades, etc.

https://rubytogether.org/news/2019-07-08-june-2019-monthly-update shows where the money is going. Literally just last month:

password resets gained support for 2FA, API keys are now reset at the same time as passwords, and administrators now have more automation to deal with malicious gems or users more easily.

@hsbt hsbt transferred this issue from rubygems/rubygems Aug 21, 2019
@hsbt
Member

hsbt commented Aug 21, 2019

The delivery of email is NOT free of charge. We are sponsored by Ruby Central for infrastructure, not Ruby Together.

We are still using SendGrid's free plan. @sonalkr132 has already asked @evanphx about this email issue.

@bronzdoc
Member

@DanielHeath

So, before getting bogged down in details - as a really, really basic starting point - how about I:
Write a scope for https://github.com/rubygems/rubygems.org/ which finds owners of popular gems without 2FA enabled, and
Add a mailer that asks them to enable 2FA, and
Ask a maintainer to merge/deploy & send the email to those owners

This is in our plans and was discussed today with the rubygems.org team.

We have plans to send a one-time email to all users about 2FA being available. Our blocker on this was that our free SendGrid account limits won't support it at the moment, but I heard @evanphx is working on that.

@matiaskorhonen also proposed to start out by putting a prominent notice up on rubygems.org for users who haven’t enabled 2FA on their account yet, prompting them to go enable it.

There is a considerable number of users deploying via CI; requiring 2FA would mean CI pushes stop working, as @colby-swandale mentioned. But I could work on this if we can all reach a consensus.

@DanielHeath

IMO it's fine for an API token to be used to push out a release, provided the API token was obtained via 2FA.

@DanielHeath

Sendgrid gives you 100/day on the free plan, but their paid plan is only $15/mo. I don't think working around that is an efficient use of paid developer time.

@rubyFeedback

Add a mailer that asks them to enable 2FA, and

Don't do that. Not everyone wants to, or can, enable 2FA, so you would just end up spamming those users. It is fine if the dev wants to do so, and can do so, but this does NOT include everyone.

I am fine with the POSSIBILITY of sending emails - but not with making it MANDATORY. I would absolutely hate it if rubygems.org forced me into becoming a spammer, or spammed me. So it must be an option that developers can decide on.

On a side note, this issue is a bit long, with lots of different ideas and suggestions. I think it would be better to split out separate topics and discuss them separately, too. Right now some of them are lumped together, including different trade-offs (some suggestions are perfectly fine and come with barely any trade-off; others come with a massive trade-off, so they are more problematic).

Edit: I missed an older comment :/ where 2FA was suggested for the more popular gems rather than everyone. I don't have any big gems myself, so I am not the main target audience (my most popular gem has only about 300,000 downloads, which is better than nothing but far away from the top). I am still not completely sure whether every popular gem maintainer wants to spam downstream users with emails without being able to control this behaviour (it's ok if this can be controlled by the authors, by the way; I just dislike forced behaviour changes).

Otherwise I agree with mame - but as said before, I think it would be better to split the discussions into separate issues. ioquatix mentioned lots of ideas, which is ok, but it is very, very hard to discuss them cleanly - we already have almost 50 comments and I am sure that will increase over the coming weeks.

@DanielHeath

I think you've misunderstood. I'm suggesting the server that runs rubygems.org should send one email, once, to the members of rubygems.org who are maintainers of popular software.

I'm not asking you to send any email at all.

@ioquatix
Author

I started getting email when I publish gems - super awesome - great work everyone!

[screenshot: rubygems.org gem push notification email]

@JonRowe

JonRowe commented Sep 9, 2019

There's a suggestion for the Elixir equivalent (hex.pm) about hosting diffs of the packages hosted by hex; would this be possible for RubyGems?
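
A manual approximation of that kind of diff is already possible today; a rough sketch, assuming you have the published .gem file locally and a checkout of the repository at the matching release tag (the paths and gem name are illustrative):

```ruby
require "rubygems/package"
require "tmpdir"
require "open3"

# Hypothetical paths; adjust to the gem and tag you want to inspect.
gem_file = "somegem-1.2.3.gem"       # e.g. obtained via `gem fetch somegem -v 1.2.3`
repo_dir = "/path/to/somegem"        # checkout of the v1.2.3 tag

Dir.mktmpdir do |unpacked|
  # Unpack the published gem's files into a temporary directory.
  Gem::Package.new(gem_file).extract_files(unpacked)

  # Recursively diff the published contents against the repository.
  diff, _status = Open3.capture2("diff", "-r", unpacked, repo_dir)
  puts diff.empty? ? "published gem matches the repository" : diff
end
```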

@mensfeld
Member

mensfeld commented Sep 9, 2019

@JonRowe there's already an issue for this in RubyGems. The RubyGems team internally uses my code: https://diff.coditsu.io/ - it's not OSS (yet) due to some security concerns.

@mensfeld
Member

mensfeld commented Sep 9, 2019

ref: #1853

@lmansur
Contributor

lmansur commented Sep 17, 2019

I created a PR to allow owners to audit their fellow owners regarding MFA. Should be a good start. #2129

@sonalkr132 sonalkr132 unpinned this issue Mar 9, 2020
@rubygems rubygems locked and limited conversation to collaborators Sep 10, 2021
@hsbt hsbt closed this as completed Sep 10, 2021

This issue was moved to a discussion.

You can continue the conversation there.
