Friend/foe individual writers on Hacker News
OP, I love not just that you noticed this, but that you thought to post it here too. HN is the best.
The boots theory is a concrete way of expressing the risk of ruin, which is the principal advantage of wealth (though our society has layered on many others): the rich can afford to take more risk, and consequently enjoy more reward. A poor person who buys the $50 boots has a much higher risk of coming up short for something else, and that lapse may have disproportionate consequences. So they go for the cheap boots, which end up costing them more in the long run, trapping them in an endless cycle…
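The arithmetic behind the comment can be sketched in a few lines. The numbers here are illustrative (the comment only mentions the $50 boots; the $10/one-season figure is the usual boots-theory example, not from the comment):

```python
# Toy illustration of the boots theory: cheaper boots cost more over time,
# but the durable pair requires more cash up front, which is exactly the
# risk-of-ruin problem. All figures are hypothetical.

def total_cost(price, lifetime_years, horizon=10):
    """Cumulative spend on boots over `horizon` years."""
    pairs = -(-horizon // lifetime_years)  # ceiling division: pairs needed
    return pairs * price

good = total_cost(price=50, lifetime_years=10)   # one durable pair
cheap = total_cost(price=10, lifetime_years=1)   # replaced every year

print(good, cheap)  # 50 100: the cheap boots cost twice as much long-run

# But with only a $60 buffer, the $50 pair leaves $10 on hand;
# any surprise expense over $10 that month means ruin.
buffer = 60
print(buffer - 50, buffer - 10)  # 10 50: cash remaining after each choice
```

The point the comment makes falls out directly: minimizing long-run cost and minimizing ruin probability are different objectives, and only the wealthy get to optimize the first one.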
Maybe a better way to accomplish this is a free yearly physical with a doctor? The doctor can then be required to share any changes in disability with the government. Missing x years of appointments also stops your benefits. If you can't come to the doctor maybe they could do a house call?
They are! Yudkowsky sat down with Senator Bernie Sanders last month to explain what's at stake, successfully convinced him that it's a big deal, and Sanders has now proposed a national moratorium on AI data centers (https://www.sanders.senate.gov/press-releases/news-sanders-o...) to help slow things down. That's pretty direct, and a lot more useful than random violence by random people.
Does this apply to other domains or just AI? For example, if you think gain-of-function research accidents put millions of lives at risk, is the logical next step to quit your job and become a terrorist?
Your statement is incorrect. If you really believed what Yudkowsky says, you would be taking action that maximizes the chances of reducing a clear and present danger. Between Yudkowsky and the Molotov cocktail guy, which approach do you think had, and is having, more of an impact? An individual can rarely, if ever, enact change through violence. The history of nearly all successful movements shows that violence often makes change harder. Rallying people through speech is a far more successful way for an individual…
No you wouldn't. Look at what the Molotov cocktail guy accomplished by "taking direct action against a clear and present danger": nothing, besides casting himself as an extremist nut and increasing resistance to his viewpoint in the population at large. It's downright dumb to attempt to impose your will via unilateral violence when you aren't in a position to actually complete the goal. Note that that goes whether you're actually right or not.
It is completely coherent to think both that an extremely bad thing is coming and that this does not justify any particular action. "The ends don't justify the means": literal entire religions have been built on this concept. It is not irrational or incoherent to believe that even something as serious as extinction does not justify arbitrary action. Someone _may_ decide that it does, but it is not a necessary conclusion. And that is completely aside from the many, many (in my opinion convincing)…
Yudkowsky himself also posted a rebuttal today: https://x.com/ESYudkowsky/article/2043601524815716866
> The Rational Conclusion of Doomerism Is Violence

No it isn't. The most prominent "doomer" has a strong grasp of, and a deep, wholehearted appreciation for, the principles of liberalism and the rule of law:

https://x.com/ESYudkowsky/status/2043601524815716866

Which the author of this piece of slop appears to lack.
Hang on, without a dog in this fight, have I asked the people who trained their whole lives to drive cool cars if this particular cool car, which they were not involved in designing or building, is safe to drive? Is that what you are asking?
We have ceded too much ground in this debate. When I say "trans women are women" I mean that, ontologically, it is really true that trans women are a subcategory of the general class "women."

Like you say, we are searching for outliers. We don't cut out women who are too strong or too tall. We shouldn't cut out women who happen to be trans. If all the top levels of women's sport end up dominated by trans athletes (something I don't see occurring, and that isn't supported by the data), then good, ou…
> As long as there is a gap between AI and human learning, we do not have AGI.

Don't read the statement as a human dunk on LLMs, or even as philosophy. The gap is important because of its special and devastating economic consequences. When the gap becomes truly zero, all human knowledge work is replaceable. From there, with robots, it's a short step to all work being replaceable. What's worse, the condition is sufficient but not even necessary. Just as planes can fly without flapping, the economy ca…
Francois here. The scoring metric design choices are detailed in the technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf. The metric is meant to discount brute-force attempts and to reward solving harder levels instead of the tutorial levels. The formula is inspired by the SPL metric from robotics navigation; it's pretty standard, not a brand new thing.

We tested ~500 humans over 90-minute sessions in SF, with a $115-$140 show-up fee (then +$5/game solved). A large fraction…
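For readers unfamiliar with SPL: the standard "Success weighted by Path Length" metric from embodied-navigation benchmarks, which the comment names as the inspiration, looks like this. This is a sketch of the generic SPL formula, not the exact ARC-AGI-3 scoring rule (which is in the linked report):

```python
# Sketch of the standard SPL (Success weighted by Path Length) metric from
# robotics/embodied navigation. Function name and episode format are
# illustrative; this is not the exact ARC-AGI-3 formula.

def spl(episodes):
    """episodes: list of (success, shortest_len, actual_len) tuples.

    Each episode contributes success * shortest / max(actual, shortest),
    so a success achieved with many wasted actions (brute force) is
    discounted, while an efficient success scores near 1.
    """
    total = 0.0
    for success, shortest, actual in episodes:
        total += success * shortest / max(actual, shortest)
    return total / len(episodes)

print(spl([(1, 10, 10)]))   # optimal solve: 1.0
print(spl([(1, 10, 200)]))  # brute-force solve: 0.05
print(spl([(0, 10, 50)]))   # failure: 0.0
```

This shape explains the design goal stated above: a brute-force agent that eventually clears a level still scores far below a human who solves it efficiently.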
No, those aren't issues. But it's good to know the meaning of those numbers we get. For example, 25% is about the average human level (on this category of problems). 100% is either top human level, superhuman level, or the information-theoretically optimal level.
Reading these posts always makes me feel like an imposter. People are dealing with such low-level things, while I'm out here building simple CRUDs.
I strongly disagree, and not because of the "Microsoft is associated with bad things and that's a form of violence" points other people mentioned.

The end result of treating domestic and sexual abuse like Serious Important Subjects that people should only talk about in a Serious Respectful Tone isn't that people become more mindful of abuse dynamics; it's that they avoid bringing up the subject at all.

In practice, yes, abusive practices of corporations echo abusive practices of violent partners…
Strongly agree with you (and disagree with GP).

I'll also note that the same demand for a Serious Respectful Tone never seems to be invoked for metaphors that refer to other kinds of serious crime (including violent crime, such as murder). You can say "great job, you killed it out there", or "oof, my sportsball team got destroyed", etc., and nobody seriously proposes that this somehow devalues life (human or otherwise).
Imagine doing AI development in waterfall. You spend weeks writing your prompt until you think you have it perfect, and only then do you submit it to the AI. Then you wait a week or so, see what it produced, and expect it to be exactly what you wrote.

Or: you tell it the basic functionality you want, test it out, then add feature after feature, sometimes dropping features and sometimes adding new ones you thought of as you worked.
> Agile itself is predicated on software being difficult to ship/expensive.

No, the opposite; it is predicated on software being cheap and easy to ship, but hard to correctly anticipate the needs for.

> It might not make sense to continue (waterfall might be better actually)

Waterfall, not agile, is predicated on software being difficult to ship/expensive.
No, because that puts the effort of fighting bad actors on everyone. It means that every day you have new trolls spewing hate in your comments, and that your users have to constantly keep blocking trolls who follow them (and who recruit other trolls to join them) until they get tired and leave the platform. This isn't an academic debate, we've been seeing this play out online for at least 30 years. Probably longer - I wasn't around for Usenet's heyday but it wasn't immune either.
You are aware that he doesn't actually believe that he "[...] could have convinced [..]". It's a manner of speech, an instrument of telling a story, a way to express how completely absurd "US getting involved in Greenland" is for anyone who understands the land (geography/weather) and people, even apart from geopolitical aspects like alienating allies.
Semi-related, but Archive of Our Own has been down for 2+ days now.
Similar CV, similar take. My guess? Anyone involved in automation for >2 years at the enterprise level knows in their gut all the silent, sudden, annoying ways automation can fail, and so has a higher internal bar for "must save this much time to be worth automating."

That said, old beliefs should be challenged by new technological capabilities! If LLM-based automation is (a) less fragile and (b) quicker to develop, then that bar should be lowered.
A question I've been asking myself and which I honestly want to put out there (and I apologize in advance, because you will see me repeat it in other threads, out of genuine curiosity): does your life have so much friction that you need a digital agent to act on your behalf?

Some of the use cases I saw on the OpenClaw website, like "checking me into a flight", are non-issues for me. I work in business automation, but paradoxically I don't think too much about annoyances in my private life. Everything…
"Assume good faith" does not mean "extend an unlimited amount of good faith to demonstrably bad-faith actors".view on HN →
Topics like this are where I struggle with HN philosophy. Normally, avoiding politics and ideology where possible creates higher-quality and more interesting discussions. But how do you even begin to discuss that Tweet or this topic without talking about ideology, and without contextualizing it with other seemingly unrelated things currently going on in the US? I genuinely don't think I'm conversationally agile enough to discuss this topic while still avoiding the political/ideological rabbit-hole…
The administration's approach to contracts, agreements, treaties and so on could be summed up as 'I am altering the deal. Pray I do not alter it further.'

The basic problem in our polity is that we've collectively transferred the guilty pleasure of aligning with a charismatic villain in fiction to doing the same in real life. The top echelons of our government are occupied by celebrities and influencers whose expertise is in performance rather than policy. For years now they've leaned into the aesthetic…
The disconnect here for me is: I assume the DoW and Anthropic signed a contract at some point, and that contract most likely stipulated the things they can do and the things they can't do. I would assume the original terms the DoW is now railing against were in those original contracts that they signed. In that case it looks like the DoW is acting in bad faith here: they signed the original contract and agreed to those terms, then they went back and said no, you need to rem…
Not that one.

I think there's MORe to GAiN from STAyiNg away from this deLicatE storY.

Sorry my keyboard acted up and I can't seem to delete that sentence.
Oh wow, Sonnet still isn't handling it well:

Opus 4.6: Drive (https://claude.ai/share/d57fef01-df32-41f2-b1dc-07de7916bdc7)
Opus 4.5: Drive (https://claude.ai/chat/a590cac1-100a-490b-b0a2-df6676e1ae99)
Opus 3.0: Walk (https://claude.ai/chat/372c144c-d6eb-43f5-b7ea-fd4c51c681db)
Sonnet 4.6: Walk (https://claude.ai/share/1f2a80f3-4741-40a5-8a05-7349ea1a17e5)
Sonnet 4.5: Walk (https://claude.ai/share/905afeb6-ffc9-4b4b-a9ee-4481e5cfd527)

Favorite answer, using my default custom instructions: "Drive. Walk…
They didn't disclose it though. It's no different from sticking a bitcoin miner in a video game and telling the user "WARNING DANGER CAUTION ;)"
... been saying this for years. If you really believed what Yudkowsky says you wouldn't just be posting on lesswrong, you would be taking direct action against a clear and present danger.
It is true that only Yudkowsky gets to say what the rational conclusion of his ideas is. Nobody else gets to speculate. Only the pope of rationalism, because he's the rational one here. See? It's right there in the name!

> this piece of slop

Citation needed. Or maybe we need to update the title of that children's book for internet arguments: Everyone Who Disagrees With Me Is Slop. The Yud post you linked is not slop, either. It's not LLM-generated, nor is it insincere. But I do have to point out:…
You're saying some people want a particular end, and that justifies certain illegal, violent, and discriminatory means. I'd say those people support authoritarian politics at the least. Now add in the context of the end in question (less immigration of racialized people) and the means in question (indiscriminately imprisoning minorities), and that in itself is well in line with fascism.
That's a ridiculous constraint to put on the freedom to enter into contracts.
If you are going to insist ontologically that men are women and women are men then words have no meaning and you aren't ceding any ground at all.
> The gap is important because of its special and devastating economic consequences. When the gap becomes truly zero, all human knowledge work is replaceable. From there, with robots, it's a short step to all work being replaceable.

I don't know why statements like this are just taken as gospel fact. There are plenty of economic activities which do not disappear even if an AI can do them. Here's one: I support certain artists because I care about their particular life story and have seen them perform live…
> I think it's rather relevant that she affirmatively searched for and found the email?

It is. There are lots of relevant facts. Did I claim otherwise?
> If you decline the new contract, you're entirely welcome to continue on the old T&C.

I think the point of contention here is that in practice, there is no way to continue on the old terms of service/contract. Suppose you're using a note-taking app, and one day they update their terms of service to say that they can use your notes to train their AI. "Continued use implies consent," so you are locked into the new terms of service unless you stop using the app right then and there. You are not aff…
> If you decline the new contract, you're entirely welcome to continue on the old T&C.

Obviously never true anywhere on Earth.
> Clauses existing have very little to do with it being enforceable.

You cannot cancel a contract for "any reason". In most jurisdictions that will be an unenforceable term. Usage here being consent is an unpublished ruling. It does not set precedent; it refers only to these specific circumstances, where clients sought to understand terms before choosing to continue usage. Legally, usage with foreknowledge is consent. Usage where you already agreed to terms implicitly, because you sought and und…
> Moderators are a downright scarce resource.

If you restrict moderation to stuff like gore and porn, then you don't need that many moderators.

> When you let people spew hateful things you drive away the people you want in the community

Can't people just unfollow or block others whose opinions they don't want to see?

> Then there's the fact that it takes far more energy to refute bullshit than to spew it

There is no obligation to refute bullshit to begin with. It's a personal choice about how to spend…
> If the American press had given me 20 minutes of airtime I could have convinced everyone they don't want to get involved with Greenland.

On one hand the author recognizes the scope of the "protocol wars" as a rational thing being irrelevant in the actually relevant time span. On the other hand, the author swears that they can bring rationality to a deeply emotional matter through discourse.
Are employees from Anthropic botting this post now? This should be one of the top most-voted posts on this website, but it's nowhere in the first 3 pages.

Also remember: using Claude to code might make the company you're working for richer. But you are forgetting your skills (seen it first-hand), and you're not learning anything new. Professionally you are downgrading. Your next interview won't be testing your AI skills.
I met a coder who has several self-made programming languages, and I would never allow him anywhere near any codebase for which I'm responsible. So writing a Lisp dialect is not something which makes you a good coder for sure. Even a junior can do that. Making it good, and being able to really reason about your choices, is a different story. I've never seen any good new reasoning from Graham, the way, for example, Dan Abramov does all the time. They are not even close, and definitely not in favor of…
This is because it is without thinking enabled. Of course the results are disappointing.
Hacker Smacker — Friend and foe writers on Hacker News