June 09, 2021

Security for MSPs: VDPs, Bug Bounties, and Responsible Disclosure

By Justin Bacco
Cybersecurity

In the year 2021, information security is everyone’s problem. Fortunately, the industry is booming and there’s an unprecedented number of professionals who are willing to help you hunt for vulnerabilities within your environment. Not so fortunately, many of these professionals will often approach businesses unsolicited, offering guidance about a flaw they’ve already discovered in your company’s products or infrastructure.

If Linus’s Law (“given enough eyeballs, all bugs are shallow”) bears any truth, collaboration between you and these professionals is key to preventing disaster. So where do you get started?

Vulnerability Disclosure Programs (VDPs)

The very act of hacking into a digital system is a sensitive subject with tremendous potential for legal and financial consequences. But what if you want security professionals to identify flaws within your organization? How do you go about ensuring these professionals will act in your best interest? In order to establish clear boundaries, someone must define some rules of engagement. That someone is you, by defining a Vulnerability Disclosure Program (VDP).

A VDP (where the “P” can also stand for “Policy”) is a set of guidelines that tells researchers how to engage with your organization in your best interest. In it, you also provide answers to questions such as:

  • Which of your systems am I allowed to engage with?
  • Who do I contact if I find a vulnerability?
  • When should I expect a response to my emails?
  • Do you promise not to sue me into oblivion?

So how do you go about writing a policy? Where do you start? Well, the bad news is that there is no standard. The good news is — there is no standard!

The rules of engagement can be anything you want them to be. If you’re experiencing writer’s block, it might help to read the published programs of other companies, or to start from one of the many freely available policy templates.

At a minimum, your policy should include:

  1. A list of in-scope and out-of-scope systems/software.
  2. Rules and expectations for the researcher in regard to behavior, communication, and reporting.
  3. Ways for the researcher to contact you.
  4. An indication of Safe Harbor, giving the researcher confidence that you won’t take legal action against them.
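
To make those four elements concrete, a bare-bones policy might be outlined as follows. This is only a sketch; the company name, domains, and addresses are placeholders, not a prescribed format:

```
Example Co. Vulnerability Disclosure Policy

Scope
  In scope:     *.example.com, the Example desktop and mobile apps
  Out of scope: corp.example.com, third-party services, physical attacks

Rules of engagement
  - Do not access, modify, or destroy data that is not your own.
  - No denial-of-service testing or social engineering.
  - Report findings privately and allow us time to remediate.

Contact
  security@example.com (PGP key available on request)

Safe Harbor
  We will not pursue legal action against researchers who act in
  good faith and stay within the scope and rules above.
```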

The security.txt file

Beyond the policy itself, there is a growing convention around a special security.txt file that lives on your public web server and gives researchers key information about your program. There’s a fantastic web tool at securitytxt.org that generates this file and provides deployment instructions.

You may notice that the security.txt file provides a pointer to your PGP key for email encryption. This part is optional (like everything else we’ve discussed so far), but it’s often recommended so that researchers can encrypt the contents of sensitive emails.
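
For illustration, a minimal security.txt might look like the following. The URLs and address are placeholders; by convention the file lives at /.well-known/security.txt on your web server:

```
Contact: mailto:security@example.com
Expires: 2022-06-09T00:00:00.000Z
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/vdp
Preferred-Languages: en
```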

There are several ways to generate a PGP key, from desktop apps to command-line tools.
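
As one common option, GnuPG can generate a key pair non-interactively. A sketch, assuming gpg (version 2.1 or later) is installed; the name, email, and expiry values are placeholders:

```shell
# Generate a 4096-bit RSA key pair with GnuPG's unattended mode.
# Substitute your own identity values before running.
gpg --batch --gen-key <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 4096
Name-Real: Example Security Team
Name-Email: security@example.com
Expire-Date: 1y
%commit
EOF

# Export the public key so researchers can encrypt mail to you
# (this is the file your security.txt Encryption field can point to).
gpg --armor --export security@example.com > pgp-key.txt
```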

Public vs. private programs

A VDP might come in one of two flavors:

Public programs are those which invite the world; any researcher can participate at any time, and the details of your program are public knowledge.

    • Pro: Ensures you’re getting the most eyes on your systems and products.
    • Con: May require a considerable amount of resources to triage all of the submissions (most of which will likely contain low-quality findings and reports).

Private programs are invite-only; researchers must first ask for your permission to participate, and the details of your program are unpublished.

    • Pro: Helps keep the volume of issues to a minimum for those without the resources to keep up.
    • Con: May be abused by some vendors as a gag order or as a “minimum effort” showcase.

It’s often recommended to start with a private program and evolve into a public program after you’re certain you can handle what might be 5-10x the volume of report submissions.

Vulnerability rewards (“Bug Bounties”)

While some researchers might work pro bono, the majority would prefer to be compensated for their efforts. This part can get touchy if a researcher demands payment for an effort that was unsolicited.

You are not required by any law or moral code to reward anyone for any of their unsolicited efforts, nor should you feel obligated to do so if it’s not within your budget. That said, the majority of researchers can and will appreciate any compensation or recognition for their efforts, big or small.

Some ideas for rewarding researchers:

    • Money, where more critical issues receive higher rewards
    • Company swag
    • Name recognition on a Hall of Fame page
    • Discounts and gift cards for common shops
    • Coins, figurines, or small trinkets symbolizing their accomplishments
    • Paid subscriptions for common hacking tools or learning materials

The most important part of this section is to set clear expectations from the start. Make certain your rewards policy is clearly defined on your website to minimize the chance of a misunderstanding.

I want to pay people, but how much is reasonable?

I feel the need to emphasize that this part is entirely up to you.

At the time of this writing, Google’s reward amounts for security vulnerabilities range from as low as $100 to as high as $31,337, while Apple’s range between five and seven figures. A more practical example might include Slack’s program, where payout ranges were recently bumped from $100-$1,500 to $250-$5,000 as their program matured. Start small and work your way up.

One thing to be cautious of is any researcher who demands compensation before submitting their findings. At best this is unethical; at worst it’s blackmail and subject to legal repercussions.

Remediation timelines and public disclosure

There are a few basic expectations that all vendors should outline and adhere to:

  1. Provide an initial response to the researcher within 1-3 business days.
  2. Make an effort to remediate the issue within a maximum of 90 days.
  3. Provide the researcher with updates on the issue and its remediation at regular intervals.
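
These deadlines can be computed mechanically once a report arrives. A quick sketch using GNU date (assumes GNU coreutils; calendar days are used here as an approximation of business days):

```shell
# Given the date a report arrives, print the two VDP deadlines.
report="2021-06-09"   # date the report was received (placeholder)

echo "Initial response due by: $(date -d "$report + 3 days" +%F)"
echo "Remediation due by:      $(date -d "$report + 90 days" +%F)"
```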

In regard to publicly disclosing any issues discovered, there are three schools of thought:

Private Disclosure

The details of the issue are kept confidential. The issue is quietly fixed and neither the researcher nor the vendor ever makes a public statement.

    • Pro: If the details of the issue aren’t publicized, it’s less likely that anyone else will discover it and write an exploit.
    • Con: Those vulnerable to the issue might never become aware of its existence. For this reason, you should be cautious of VDP or Bug Bounty programs that only allow Private Disclosure.

Full Disclosure (also known as Public Disclosure)

The details of the issue are publicized as soon as the researcher discovers them.

    • Pro: Puts pressure on the vendor to take initiative and prioritize a patch.
    • Con: Hackers might utilize an exploit before vendors have the opportunity to issue a patch.

Responsible Disclosure (also known as Coordinated Disclosure)

The details of the issue are disclosed as a coordinated effort between the researcher and the vendor only after a patch has been made available to the public. This is widely regarded as the most appropriate policy.

    • Pro: Balance between remediation and publication is achieved.
    • Con: Hackers might still use the details to write an exploit for outdated and unmanaged systems.

Is it possible to hire someone to handle all of this for me?

It is! If you’re looking for help from experts, there are several companies to choose from, with HackerOne and Bugcrowd arguably being the two most popular choices.

Does Datto have a VDP?

We sure do! Visit https://dat.to/vdp for the details.

Questions, comments, concerns? Feel free to write to me directly at [email protected].
