Subject: File Number S7-06-23
From: Russell Nomer
Affiliation:

Mar. 17, 2023

To whom it may concern:
 
As a cybersecurity professional with a 30-year history in technology, I feel compelled to provide feedback on the proposed rules: https://www.sec.gov/rules/proposed/2023/34-97142.pdf
 
Begin by fixing any ambiguity.  For example,  “Proposed Rule 10 would require all Market Entities (Covered Entities and Non-Covered Entities) to establish, maintain, and enforce written policies and procedures that are reasonably designed to address their cybersecurity risks.”  
 
Words such as “reasonably designed” are what I call “S.I.A.M. Words,” short for Semantically Intentionally Ambiguous Meaning.  Where engineering is concerned, specificity is required.
 
Would you fund and build a reasonably designed house, or would you specify the blueprints and materials for a four-bedroom ranch with a two-car garage?  Do you expect a reasonable salary, or do you expect to know specifically what compensation you are paid in exchange for the value you are expected to provide?  Would you accept a reasonably designed vacation, or would that too be subjective?  The shores of Bayonne or the shores of Fiji?  Which one is reasonable?  What would a reasonable engagement ring look like?  A Ring Pop, or a GIA-certified three-carat platinum round cut?  The point here is that specificity is required to foster understanding.  Governance must be far more transparent if it is going to accelerate trust.  This rule suffers from “Cool Hand Luke Syndrome,” because what you’ve got here is a failure to communicate with sufficient detail!
 
Rather than rely on self-reporting, leverage technology and place shared responsibility and liability on the product vendors as well.  If a drug company brings something to market that causes adverse outcomes, it is held accountable.  Tech companies have been getting away with far too much for far too long, despite more than 30 years of security professionals demanding accountability.
 
For example, look at Microsoft Defender on an Office 365 E5 instance and show me how it provides feedback to financial services companies that obtain threat intelligence from FS-ISAC or MASNET, indicating whether reported indicators of compromise are already covered by design.
 
You can specify a limited number of custom hash values in the custom detection rule interface, but if an advisory contains only a hash value and you add it to the custom specification, there is no feedback loop from Microsoft indicating whether that value was already included in their detection algorithms.  Wasting time and money is never a good outcome.  The end result is that cyber defenders spend their time inputting content that may be redundant or wrong, because the design and process flow did not factor in the communication and feedback loops that the required level of transparency demands.  Yet firms are investing over seven figures in such products without any sound recourse or accountability because, hey, it’s Microsoft, they must know what they are doing, right?  Schwartau and Mudge’s testimony to Congress, L0phtcrack, and the recent Outlook vulnerability demonstrate a larger fact pattern demanding more accountability that never seems to manifest.
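To make the missing feedback loop concrete, here is a rough Python sketch of what defenders actually need.  The indicator submission is modeled on the Defender for Endpoint custom indicators API as I understand it (treat the exact endpoint and field names as assumptions); the coverage answer in the response is entirely hypothetical, because no such answer exists today, and that is the point.

# Sketch only: illustrates the feedback loop I am asking for, not an API that exists today.
# The submission below is modeled on the Defender for Endpoint custom indicators API
# (POST /api/indicators); endpoint and field names are assumptions from memory.
# The "already covered" answer in the response is the hypothetical part -- today the
# service accepts the hash and tells the analyst nothing about built-in coverage.

import requests

DEFENDER_API = "https://api.securitycenter.microsoft.com/api/indicators"

def submit_ioc_with_feedback(sha256: str, advisory_id: str, token: str) -> None:
    """Submit a file-hash IOC from an advisory and report whether it was redundant."""
    payload = {
        "indicatorValue": sha256,
        "indicatorType": "FileSha256",
        "action": "Alert",
        "title": f"FS-ISAC advisory {advisory_id}",
        "description": "Hash imported from a threat-intelligence advisory",
        "severity": "High",
    }
    resp = requests.post(
        DEFENDER_API,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    result = resp.json()

    # Hypothetical field: a vendor-supplied answer to "was this already detected by design?"
    # This is exactly what is missing from the product today.
    if result.get("alreadyCoveredByBuiltInDetections"):
        print(f"{sha256}: redundant -- already covered, no analyst time needed")
    else:
        print(f"{sha256}: new coverage added via custom indicator")

A single line of honest feedback like that would spare analysts from re-keying indicators the product may already detect.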
 
You want to do better?  Leverage AI and indicators of breach such as dark web data dumps and searchable content on services like Hack Notice, and cross-reference them against threat intelligence feeds in a manner that validates and reports findings to both the impacted organizations and the regulators in real time.  If Domain Admin credentials for a publicly traded insurance company are reported by Talos, Recorded Future, and other intelligence-gathering services, regulators should be automatically notified and asking for evidence that critical controls are in place.  For example, should a firm be able to claim plausible deniability because it did not have a SIEM in place when its domain admin credentials are sitting in a pastebin dump?  They did not get pwned by magic, so leverage the intelligence to drive the desired behaviors.  This would be analogous to how referees use cameras to review plays.  A message indicating “this is what we see happening” is far more likely to drive the desired follow-up than the expectation that firms will operate as a utopian, honor-bound society where everyone tells the truth via self-reporting.  The truth is they are more likely to operate like Officer Barbrady from South Park: “Nothing to see here, move along.”  Meanwhile, the town is burning.
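Here is an equally rough sketch of that cross-referencing workflow.  Every organization name, feed source, threshold, and notification hook in it is hypothetical; the point is that corroborating an indicator of breach across independent sources and notifying both the firm and the regulator is simple enough that there is no excuse not to build it.

# Sketch only: all data, sources, and notification targets below are hypothetical.
# The workflow: corroborate a leaked-credential report across independent intelligence
# feeds, then notify the firm AND the regulator automatically, instead of waiting for
# honor-system self-reporting.

from dataclasses import dataclass

@dataclass
class BreachIndicator:
    organization: str          # e.g., a publicly traded insurer
    credential_type: str       # e.g., "Domain Admin"
    source: str                # which intelligence feed reported it

def corroborate(indicators: list[BreachIndicator], min_sources: int = 2) -> dict[str, set[str]]:
    """Group leaked-credential reports by organization; keep those seen in multiple feeds."""
    seen: dict[str, set[str]] = {}
    for ind in indicators:
        seen.setdefault(ind.organization, set()).add(ind.source)
    return {org: sources for org, sources in seen.items() if len(sources) >= min_sources}

def notify(corroborated: dict[str, set[str]]) -> None:
    """Placeholder notifications; real hooks would reach the firm and the regulator."""
    for org, sources in corroborated.items():
        print(f"[FIRM]      {org}: domain admin credentials reported by {', '.join(sorted(sources))}")
        print(f"[REGULATOR] {org}: request evidence that critical controls (SIEM, MFA, credential rotation) are in place")

if __name__ == "__main__":
    reports = [
        BreachIndicator("Acme Insurance", "Domain Admin", "Talos"),
        BreachIndicator("Acme Insurance", "Domain Admin", "Recorded Future"),
        BreachIndicator("Other Corp", "Domain Admin", "Pastebin scrape"),
    ]
    notify(corroborate(reports))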
 
Lastly, any certification issued via a remotely proctored Pearson exam is problematic in light of how guaranteed passes on such exams are being sold as a service via LinkedIn and WhatsApp solicitations from operations that appear to be based out of India.  That is a recipe for disaster which I believe is far more prevalent than has been investigated or reported, and the risks go beyond incompetent resources, because integrity itself is compromised.  Think they are going to self-report?  Think again!
 
Thank you for enduring my feedback and rant.  I hope these perspectives help raise the bar for all.
 
Sincerely,
 
Russell D. Nomer, CISSP