NEJM – Error

mitigatedchaos:

In the #MeToo era, men’s fear of mentoring builds on the notion that women are not intelligent or perceptive enough to know the difference between a mentor’s good and bad intentions. It is on this foundation of misogyny that the claim that men fear mentoring women is built.

It doesn’t matter for the man if the woman can tell the difference between his good and bad intentions if she herself has bad intentions.

We also have to keep in mind that, while there is significant overlap, one reason the sexes are constantly squabbling in the public sphere is that, due to a combination of personality distribution and non-shared assumptions, they are in fact often bad at judging the other sex’s intentions. The problem persists in part because both sexes deliberately use ambiguity in social situations as part of their social behavior.

(There’s a post floating around Tumblr by a woman that says her jokes about being sexually interested are only jokes if the other person isn’t also interested.  The linked article does some psycho-analysis – we could here, too, but suffice it to say that there are reasons people do that and those reasons aren’t going away.)

And of course, intelligence acts under the influence of ideology, which can prime people to see good or bad intentions where there aren’t any.

I’d say the article is mixed, and some of the recommendations it makes for specific actions are more-or-less good, but there’s definitely this undercurrent of “men are just making up their concerns in order to oppress women” and “men don’t have a right to take actions to protect themselves and their reputations,” both of which are all-too-common attitudes in Feminist thought.

Making family leave policies available to everyone is good and straightforward and helps non-traditional households.

On the other hand, any kind of “implicit bias training” is automatically suspect.

In the #MeToo era, men’s fear of mentoring builds on the notion that women are not intelligent or perceptive enough to know the difference between a mentor’s good and bad intentions. It is on this foundation of misogyny that the claim that men fear mentoring women is built.

“If you think other humans aren’t mind readers, that means you think women are unintelligent and you are a misogynist.”

Well I know what I think the author of this piece is, anyway.


Zoo won’t panda to taste, says fruit’s too sweet for its monkey menu

the-grey-tribe:

shieldfoss:

the-grey-tribe:

If you plant an apple seed, you get a wild apple variety, won’t that get you 90% there? Same goes for plums and pears and cherries.

But then you’re running an orchard, not a zoo.

just plant them and let your animals forage when they are in season?

Oh sure, and it makes an authentic kind of sense to have some of the animals’ native vegetation in the animal enclosure, so it’s a 2-for-1 kind of deal.

But that’s gonna be a pretty big enclosure if it’s supposed to solve the entire problem. Zoos are typically located near cities where people can visit them (and land is expensive) and orchards are typically located way the fuck out in the boonies where huge tracts of land are affordable. Co-locating them seems like a worst-of-both-worlds deal.


Pakistan sentences Christian man to death for blasphemy

slartibartfastibast:

argumate:

mahamara:

A Christian man has been sentenced to death on blasphemy charges by a court in eastern Pakistan after a close friend accused him of sharing anti-Islamic material, the defendant’s lawyer said.

Blasphemy is a criminal offence in Muslim-majority Pakistan, and insults against the Prophet Mohammad are punishable by death. Most cases are filed against members of minority communities.

Nadeem James, 35, was arrested in July 2016, accused by a friend of sharing material ridiculing the Prophet Mohammad on the WhatsApp messaging service.

Lawyer Riaz Anjum said his client intended to appeal against the verdict, passed on Thursday by a sessions court in the town of Gujrat.

There was widespread outrage across Pakistan last April when student Mashal Khan was beaten to death at his university in Mardan following a dormitory debate about religion.

Police arrested more than 20 students and some faculty members in connection with the killing.

Since then, parliament has considered adding safeguards to the blasphemy laws, a groundbreaking move given the emotive nature of the issue.

While not a single convict has ever been executed for blasphemy in Pakistan, about 40 people are currently on death row or serving life sentences for the crime, according to the United States Commission on International Religious Freedom.

Right-wing vigilantes and mobs have taken the law into their own hands, killing at least 69 people over alleged blasphemy since 1990, according to an Al Jazeera tally.

In March, Pakistan’s ex-Prime Minister Nawaz Sharif ordered the immediate removal and blocking of all online content deemed to be “blasphemous” to Islam from social media – and for those responsible to be prosecuted.

In June, 30-year-old Taimoor Raza was sentenced to death for allegedly committing blasphemy on Facebook, a prosecutor said, in the first such case involving social media.

In May, a 10-year-old boy was killed and five others were wounded when a mob attacked a police station in an attempt to lynch a Hindu man charged with blasphemy for allegedly posting an incendiary image on social media.

And in 2011, a bodyguard assassinated Punjab provincial governor Salman Taseer after he called for the blasphemy laws to be reformed.

We Need More Atheism

@argumate Are there Christians doing this anywhere still? To anyone. Or are you just talking out your ass?

Thankfully most Christians have stopped performing Christianity to any real degree, mainly because of all the secularists around them I expect.


SEC charges Tesla CEO Elon Musk with fraud

argumate:

eightyonekilograms:

argumate:

cryptid-sighting:

“According to Musk, he calculated the $420 price per share based on a 20% premium over that day’s closing share price because he thought 20% was a ‘standard premium’ in going-private transaction,” the SEC alleged in its suit. “This calculation resulted in a price of $419, and Musk stated that he rounded the price up to $420 because he had recently learned about the number’s significance in marijuana culture and thought his girlfriend ‘would find it funny, which admittedly is not a great reason to pick a price.’”
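Spelled out, the arithmetic the complaint attributes to Musk is just a premium-plus-rounding calculation. A quick sketch (the closing price below is back-derived from the $419 figure in the quote, not taken from market data):

```python
# Back out the closing price implied by the complaint's $419 figure;
# the actual market close that day is not quoted here.
implied_close = 419 / 1.20           # ≈ $349.17
offer = implied_close * 1.20         # 20% "standard premium" -> $419.00
tweeted = 420                        # rounded up by hand, per the complaint
premium_over_close = tweeted / implied_close - 1
```

The extra dollar nudges the actual premium to roughly 20.3% over the implied close, so even the “standard premium” framing doesn’t quite survive the joke.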

I’m losing it

Grimes is now being called out in an SEC complaint, this is amazing.

Musk knew that he (1) had not agreed upon any terms for a going-private transaction with the Fund or any other funding source; (2) had no further substantive communications with representatives of the Fund beyond their 30 to 45 minute meeting on July 31; (3) had never discussed a going-private transaction at a share price of $420 with any potential funding source; (4) had not contacted any additional potential strategic investors to assess their interest in participating in a going-private transaction; (5) had not contacted existing Tesla shareholders to assess their interest in remaining invested in Tesla as a private company; (6) had not formally retained any legal or financial advisors to assist with a going-private transaction; (7) had not determined whether retail investors could remain invested in Tesla as a private company; (8) had not determined whether there were restrictions on illiquid holdings by Tesla’s institutional investors; and (9) had not determined what regulatory approvals would be required or whether they could be satisfied.

“At 1:00 PM EDT, approximately 12 minutes after Musk published his tweet stating, ‘Am considering taking Tesla private at $420. Funding secured,’ Tesla’s own head of Investor Relations sent a text to Musk’s chief of staff asking, ‘Was this text legit?’” the complaint says.

“Among other remedies, the SEC is seeking to bar Musk from serving as an officer or director of a publicly traded company if found guilty.”

yeah I’d say so.

Felon Musk


Persuasive Language for Language Security: Making the case for software safety

nuclearspaceheater:

ms-demeanor:

shieldfoss:

ms-demeanor:

stumpyjoepete:

It’s like 50/50 that I found this via work rather than via tumblr, so apologies if I’m failing to credit someone I follow for introducing me to this.

Some humorous bits:

So I brought up hacking contests and I asked this young man what the word Cyber meant. He told me that cyber was a word used exclusively by people in government to let everyone know that they didn’t understand how computers worked. I think maybe he was on to something. I think this definition is still universally accepted in the hacker community.

I was informed that a network monitoring approach was so effective that it continually discovered zero-day malware. To this day I don’t know what that means.

More seriously, the talk is making a case that the security community does a terrible job of communicating its ideas to policy folks (and additionally, that policy folks are not idiots to be ignored).

One point he makes is that the way the security community talks about exploits (which has fed into some hyperbolic warfare metaphors) encourages policy folks to see them as basically equivalent to weapons and to imagine that banning them will solve the problem. Worse, security folks respond by trying to explain technical details, rather than reaching for better metaphors or other persuasive techniques.

Around 2004 the global internet was struck by a worm that contained a remote exploit–a Von Neumann UDP probe, if you will–that came to be known as SASSER because of the name of the process it knocked over. I worked for a pentesting team that developed automation the day my company was struck by Sasser. My company was a global company with a Class A network and tens of thousands of globally distributed computers. The worm crippled us in hours. My team and I hatched a plan to scan our entire class A using our toolset, automatically throw the same exploit as the worm, get a shell on computers in our company, and install the patch. We got authorization to execute this plan, executed it, patched thousands of computers all around the world, and ended the worm outbreak on our network without a single administrator-installed patch, put a massive organization back in business, and got home in time for dinner. To do this, we built a framework which handled and used remote exploits and launched them against thousands of computers all over the globe. This framework engaged in global automated exploitation for the purpose of authorized access. I literally got to hack the planet- a fact I only realized years later.

So, that’s the story. I would tell this story, and the folks on the other side of the table would pause a bit, then they would say “so you attacked your own network”?

And I would say, “no, we defended our network with a tool called an automated exploitation framework”.

And then they would look again at the proposed ban with new eyes.
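The remediation the speaker describes reduces to a sweep/check/fix loop over the fleet. A toy sketch of that shape (every host name and check below is invented for illustration; the talk doesn’t publish its actual framework):

```python
def remediate(hosts, is_vulnerable, deploy_patch):
    """Sweep a fleet; on each vulnerable host, use the same access the
    worm would use, but to install the fix instead of the payload."""
    patched = []
    for host in hosts:
        if is_vulnerable(host):
            deploy_patch(host)
            patched.append(host)
    return patched

# Simulated fleet: True means the host is still unpatched.
fleet = {"db01": True, "web01": False, "hr-laptop": True}
fixed = remediate(list(fleet), fleet.get, lambda h: fleet.update({h: False}))
```

The framing point survives even in the toy: the same loop is an “automated exploitation framework” or a “patch deployment tool” depending entirely on the verb you choose.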

He also brings up the idea that an exploit, at its heart, is really a “proof of vulnerability” (an idea he credits to Meredith Patterson):

Now I could explain this language choice easily: an exploit, free of context, is a word that will make people think it’s an attack; banning exploits and their containers is simply banning the act of attacking. A proof, free of context, is scientific evidence, and banning proofs is an attack on science. Of course, I would never explain this, because if you’re explaining, you’re losing. Let’s stop losing. No one should attack science.

If we describe our complex, dual-use landscape of information/code with single-use language, we will summon mythical monsters and see them used against us. The solution is clear. Rewrite the cyberwarfare secret decoder ring, using absolutely no jargon. There are no attack or defense tools.

Describe powerful things with simple, precise language that expresses their core truth. No one in power should ever be put in a position where they act against something whose true nature was hidden from them. This is our communications problem, our failure, and we can solve it.

My entire brand in my local hacker group is “Talking to non-techs is a valuable skill so don’t treat people like shit because they don’t spend their weekends coding, asshole.”

Which is also something that I have to explain to my boss about his customers pretty regularly.

It’s super fucking hard for people to take you seriously about security when you don’t explain risk in terms they can understand and it’s super fucking hard for you to effectively explain things when you think doing so is beneath you.

Yeah, the way I see it is, if “explaining things to customers” is beneath you, I’m just going to assume “getting paid by customers” is also beneath you.

See, you would think so but that’s actually a fucking huge part of the problem I have with how tech as a whole works when it has to interface with non-technical users.

We *don’t* have to explain things to customers; they are still going to need our services and the other little IT companies and computer repair shops in the area have the exact same attitude. It’s shitty. But it’s a totally sound business decision. What choices do our customers have? Go to Geek Squad? Hire an actual IT team? We’re hitting the sweet spot of “knows how to work on servers” and “less than $500 a month for a 2-hour service contract” – these customers aren’t going anywhere. We’re not even a pick-two industry, we’re a pick-one industry.

And the IT team at the power company has the same attitude about management at the power company. The security group at the nonprofit has the same attitude about the volunteers. The dev team has the same attitude about the design team. The programmers have the same attitude about the payroll department.

Condescending techies who can’t believe you were such an idiot that you forgzapped the snangdooble even though it was clearly marked “No Uxpanking” are a stereotype because they’re everywhere.

And I get that we can’t teach every tech-illiterate person everything we know to get them up to speed for a conversation about protocol, but if we’re to the point that serious-but-tech-illiterate people are saying “ban exploits” we’ve clearly failed as an industry.

You make it sound like a cooperative result of techie-solidarity when you put it like that. Like an informal union: as long as everybody in the meta-industry is on the same page, then no individual can be pressured to be understandable, but if some people start speaking plainly to non-techies for their own advantage, then soon enough everyone will have to.

but if some people start speaking plainly to non-techies for their own advantage, then soon enough everyone will have to.

Not necessarily so!

If the supply/demand curve of techies is sufficiently skewed, it might be a situation where even the caustic techies make enough money to be satisfied. The ones that know how to work with people just earn even more.

(This is actually pretty much how my experience in the industry has been tbh.)


Persuasive Language for Language Security: Making the case for software safety

Ah.

Our customers typically have IT departments and compliance standards.

For everything we do, we have to demonstrate that we can do it cheaper and better than their own IT department, and since their own IT department has to implement it, they have to agree.

(The way to phrase it so you sell an IT department on doing their job for them is typically “Wouldn’t it be easier for you if you didn’t have to worry about $RegulatoryIssue and we just handled it, so you could spend your time working on $CoreBusinessFunction, which is what really brings value to your company?”)
