# whoami

* Florian Gilcher
* co-founder Ferrous Systems
* co-founder Rust Foundation
* ex-Rust core & community team
* Founder of RustFest, OxidizeConf
* Used to work at SAP Security & Trust

---

# previously

* Ruby community

---

# currently

* https://ferrocene.dev
* Bringing Rust into functional safety

---

# Also

* 2nd Dan [Kyudo](https://en.wikipedia.org/wiki/Ky%C5%ABd%C5%8D)
* Certified rescue diver
* & first responder

---

# A talk about Experiences

* Sample size **ONE**
* Disagreeable
* An offer, not a recommendation

---

## Before we start, some clichés.

---

![](https://ferroussystems.hackmd.io/_uploads/SycsYatLC.jpg)

---

![](https://ferroussystems.hackmd.io/_uploads/Hy6z9TF8C.jpg)

---

![](https://ferroussystems.hackmd.io/_uploads/rJKiz_KL0.jpg)

---

![](https://ferroussystems.hackmd.io/_uploads/B1QHj9KLR.jpg)

---

Why do I want to talk about trust?

---

Because I fundamentally believe:

a) Socials _are_ tech
b) We're doing a poor job on the trust side

---

## Socials are tech

---

Meyerovich & Rabkin

[2012: Socio-PLT: Principles for Programming Language Adoption](https://lmeyerov.github.io/projects/socioplt/paper0413.pdf)

[2013: Empirical Analysis of Programming Language Adoption](https://raw.githubusercontent.com/lmeyerov/lmeyerov.github.io/master/projects/socioplt/papers/oopsla2013.pdf)

---

## Core Finding

> **Social factors outweigh intrinsics** in language adoption.

---

* Programmers do not actually value correctness highly; expressivity is far more important.
* Programmers value correctness *more* the larger the organisation gets.

---

My take in 2024:

* those values are still in place
* organisational and system complexity is growing
* correctness gets a higher weight

---

Put the other way:

* 2013: "how do we hyperscale?"
* 2024: "how do we deal with the hyperscale?"
---

Bird, Murphy, Nagappan, Zimmermann

[Empirical Software Engineering at Microsoft Research, 2011](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/show104e-bird.pdf)

---

Coins the term

> Socio-Technical Congruence

---

# Relevant Key Finding

> We found that using technical and contribution relationships together have more power than either in isolation for predicting bugs

---

## Advances in Safety

[STPA - System-Theoretic Process Analysis](https://psas.scripts.mit.edu/home/get_file.php?name=STPA_handbook.pdf)

---

## Core goal

> STPA includes software and human operators in the analysis, ensuring that the hazard analysis includes all potential causal factors in losses.

---

It particularly allows for non-technical modelling, where e.g. hierarchical relationships _intentionally_ do not imply obedience. That makes it a good fit for humans.

---

## We're doing a poor job on the trust side

---

## A Detour: Walter Bright

> I believe memory safety will kill C

– Walter Bright, DConf 2017

https://www.youtube.com/watch?v=GcFuAptUExE&t=1455s

---

Walter postulates:

* The problem of memory safety is already here, the effect isn't
* Memory safety will become a tool requirement
* The pressure will not come out of "the community"

---

So where does the pressure come from?

---

US Navy, 2022

["We have 15 years of track record that proves that the current approach to cybersecurity, driven by a checklist mentality, is wrong, it doesn't work."](https://www.defensenews.com/digital-show-dailies/navy-league/2022/04/05/us-navy-had-cybersecurity-wrong-expect-change/)

---

Following:

* NSA/CISA/BSI communication about memory safety
* UN R155/R156 - cybersecurity mandates
* The Cyber Resilience Act in the EU, which even imposes executive liability

---

Governments don't trust us as an industry to fix our stuff ourselves.

---

"It's open, you can look at it" is _not_ a trust-building activity, and we're slowly figuring that out.
---

# [polyfill.js got backdoored](https://thehackernews.com/2024/06/over-110000-websites-affected-by.html)

This used to be an XZ slide, but there's fresh news every day.

---

We're standardising the technical side of trust.

---

So, how do we get better on the social side of trust?

---

# tRust

---

I'll be looking at a few situations from projects I work(ed) in:

* Rust Project
* rust-analyzer
* Ferrocene

---

How did we make those trustable projects?

---

Focus on:

* Trust-building activities that are relatively easy to apply
* Preferably ones that are close to engineering best practice anyway

---

# Kinds of trust

* In-group trust
* Out-group trust
* General trust

---

# Internal, in-group Trust

---

* Relatively stable over time
* Important for all out-bound activities
* Builds a kind of organisational self-confidence

---

"Can I turn my back to other project members and be happy with their work?"

---

"Can I do my work without constantly checking back on others?"

---

Useful activities for creating in-group trust:

* Leadership structure and education
* Clear expectations, e.g. on time commitment
* Avoidance of judgement
* High standards without perfection
* Explicitness, particularly across cultural boundaries

---

In-group trust is the basis for everything!

---

# External trust: the masses out there.

And they are all individuals...

---

# The Rust project: massively emerging for years

---

The Rust environment between 2014 and 2024 changed substantially.

---

Each phase needs different approaches.
---

# Rust community team

* Be available
* Be around - but not everywhere
* Have consistent messaging
* Have consistent structures that can evolve in themselves
* Good artists steal

---

# community@rust-lang.org

* catch-all email
* private
* "the community switchboard"
* Active outreach
* Great task for the busy

---

## Simple, constant communication rules

* Spend time on those who want to be convinced
* No comparative communication, we can only lose
* Fun is okay, but no edginess
* We're all individuals
* 10 minutes a week > 2 hours every month

---

# Mindset

Our users are our friends and supporters, not leeches.

---

## People adopting your technologies connect their careers to you!

---

This is fundamentally different from:

* The hierarchical idea of a maintainer <> user relationship
* The idea that you don't owe users anything
* Not taking responsibility for your downstream

---

## Aside: Disagreement is a gift

We have had good experiences setting up calls with dissatisfied and frustrated people.

---

## Aside: Anger, Frustration & Disagreement are not CoC violations

(expressing them in boundary-violating ways is)

---

# Trust in Disaster: Dealing with the sudden

In 2020, Mozilla laid off 250 people. And "the Rust team".

---

1 week later: [Laying the foundation for Rust's future](https://blog.rust-lang.org/2020/08/18/laying-the-foundation-for-rusts-future.html)

---

Issues:

* No advance notice
* We have a good relationship with Mozilla, which we do not want to harm
* A lot of our friends were laid off; we can't ignore that
* We need to communicate to the Rust project
* We need to communicate to the public
* We have privileged information that is positive, but we cannot disclose it
* We have a commitment from Mozilla to help us move to a foundation

---

AND:

* There's discussion in the project
* There's discussion outside the project

---

* This takes time!
* You need to communicate!
* Don't be pushed by artificial pressure!
* You can commit to future communication!
* If people trust you, they will trust your plea for time.
* This is the moment for your strongest communicators.

---

## A word about transparency

* Transparency is not the goal here
* Transparency is not high on my list
* It's important to **distill the truth**

---

# rust-analyzer: small, coherent sponsor base

---

rust-analyzer is a sponsored project, hosted at Ferrous Systems.

---

* rust-analyzer gets funding of about 60,000 EUR each year.
* Many sponsors are individuals or small businesses
* They don't have a lot of capacity for conversation
* They want their money to be used well

---

## This week in Rust Analyzer

rust-analyzer releases an alpha every Monday, with a [changelog](https://rust-analyzer.github.io/thisweek) of the work done during the week.

---

## Like clockwork

After doing that for 5 years straight, it's a 30-minute thing.

---

Result: weekly proof that your money is working.

---

Aside: we just lost our largest sponsor, so consider sponsoring :).

---

# Trust by Following Standards: Ferrocene

---

[Ferrocene](https://ferrocene.dev) is Ferrous Systems' safety-certified Rust compiler.

[Public Documentation](https://public-docs.ferrocene.dev)

It's also the only one with fully open-source docs.

---

Safety-critical standards by and large work by:

* Mandating issues that need to be covered (requirements)
* Leaving it (relatively) open to you how you address the issue (activities)
* Requiring documentation of the successful execution of the activities (evidence)

---

* ISO 26262: "You need to do structured quality management"
* Us: there's a standard for that (ISO 9001), we follow it
* Outcome: we're ISO 9001 certified by TÜV SÜD

---

Lots of paperwork?
---

Standards have advantages:

* They have a lot of mindshare
* They are globally accepted and often translated
* They can be a trust proxy
* They force you to document

---

Standards have disadvantages:

* Mapping standards to your actual problem may be hard work
* It needs experts in both the standard and the tool
* Keeping the standards work relevant to your actual work is hard
* They are often closed and available only for pay

---

# Standards as inspiration

You don't need to get certified to get use out of a standard.

---

e.g.: we evaluated [OpenChain](https://openchainproject.org/), adopted the things that made sense, but didn't get certified.

---

# Summary

---

Good trust building is:

* Always custom to _your_ environment
* Done gradually, consistently, and with measure
* Preferably _within_ existing activities
* A team effort
* A creative, fun and ultimately rewarding act

---

# Questions? & Experiences!