• 0 Posts
  • 39 Comments
Joined 11 months ago
Cake day: July 28th, 2023





  • London is full of amazing things, but they’re spread out over such an absurdly large area that doing anything is a pain. And everyone who lives there is so numb to it! They’ll happily spend 3-4 hours a day on public transport as if this were a rational way to live.

    I’m very happy that they have a reasonably decent transit system, but fuck me, I actually wanted those 4 hours of my life.








  • Base 10 on your hands is really base 1: every finger is either 0 or 1 and we just count the ones that are up! With base 12 we do have 12 positions (the finger segments), each representing a value, and potentially two base-12 digits, one from each hand.

    Binary is so much better because you have 10 digits, just like in base 1, but you use them far more efficiently.

    The next logical step is ternary: if each finger can hold three states instead of two, the same fingers count higher than binary. Wikipedia suggests three positions for your fingers - up, down, and somewhere in between, or folded - but I’d be surprised if anyone can realistically do that with all their fingers. However, using four fingers on each hand and pointing them at different knuckles of your thumb, or at its tip, gets you 8 digits of base 4 (counting not pointing at the thumb at all as 0)… and it actually doesn’t tangle your fingers up too badly.
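
    As a sanity check on the arithmetic, here’s a tiny Python sketch (the scheme labels are just mine) printing how many distinct values each hand-counting scheme above can show:

    ```python
    # Distinct values (including 0) each hand-counting scheme can represent
    schemes = {
        "unary: 10 fingers, each up or down, just counted": 10 + 1,
        "dozenal: 12 finger segments per hand, one base-12 digit per hand": 12 ** 2,
        "binary: 10 fingers, one bit each": 2 ** 10,
        "ternary: 10 fingers, three states each": 3 ** 10,
        "base 4: 8 fingers, each pointing at the thumb tip, a knuckle, or nothing": 4 ** 8,
    }
    for name, count in schemes.items():
        print(f"{name} -> {count} values (counts up to {count - 1})")
    ```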



  • If only we could combine the two and get to 2^12… Sadly, this would require 12 thumbs.

    Ooh, actually you can get to 2^8 without worrying about those pesky tendon issues by putting your fingertips against your thumb instead of trying to extend your fingers… Hmmm… Maybe we can even go to 2^10 this way by incorporating knuckles. Might lose some time today figuring out more hand counting systems. I wonder if anything higher than 2^10 is possible…
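
    For reference, the jumps those extra bits buy (just powers of two, counting from 0):

    ```python
    print(2 ** 8 - 1)    # 8 fingertips against the thumb: up to 255
    print(2 ** 10 - 1)   # two more bits from knuckles somehow: up to 1023
    print(2 ** 12 - 1)   # the hypothetical 12-thumb version: up to 4095
    ```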


  • The Turing test is flawed: while it’s supposed to test for intelligence, it really just tests for a convincing fake. Depending on how you set it up, I wouldn’t be surprised if a modern LLM could pass it, at least some of the time. That doesn’t mean they’re intelligent (they aren’t), but I don’t think the Turing test is good justification.

    For me the only justification you need is that they predict one word (or even letter!) at a time. ChatGPT doesn’t plan a whole sentence out in advance; it works token by token… The input to each prediction is just everything generated so far, up to the last token. When it starts writing “As…” it has no concept of the fact that it’s going to write “…an AI language model” until it gets through those words.

    Frankly, given that fact it’s amazing that LLMs can be as powerful as they are. They don’t check anything, think about their answer, or even consider how to phrase a sentence. Everything they do comes from predicting the next token… An incredible piece of technology, despite its obvious flaws.
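
    To make that concrete, here’s a toy greedy-decoding loop with GPT-2 via the Hugging Face transformers library - my own illustration of the same token-by-token idea, not how ChatGPT is actually served. Each step sees only the tokens so far and produces exactly one more:

    ```python
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("As an", return_tensors="pt").input_ids
    for _ in range(20):
        with torch.no_grad():
            logits = model(ids).logits        # scores for every vocab token, given everything so far
        next_id = logits[0, -1].argmax()      # greedy: take the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append it and predict again
    print(tok.decode(ids[0]))
    ```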




  • Writing boring shit is LLM dream stuff, especially tedious corpo shit. I have to write a lot of letters and the like, and it makes things so much easier to have a machine that can summarise material and write it up in dry corporate language in 10 seconds. I already have to proofread my own writing, and there are almost always 1 or 2 other approvers, so checking it for errors is no extra effort.


  • You’re right to be sceptical. The paper is poorly written and overstates many of its results. The correlations identified between the car score and the dark tetrad scores aren’t very high - the highest is 0.51! They produced a regression model and deduced, because the F-test had a low p value, that the dark tetrad scores predict the car score. The F-test, for clarity, only determines whether a model predicts the response variable better than a model with no explanatory variables at all.

    Also worth noting that there were stronger correlations between the explanatory variables themselves than between any of the explanatory variables and the response. They should have included interactions in their regression model to account for this, or better still tried a set of models and compared them with ANOVA or similar (a rough sketch of that kind of comparison is below). As it stands, it’s impossible to say whether the model they found is actually any good. It only explains 29% of the variance, which… well, variance explained is a statistic better suited to comparing models, but it suggests quite clearly that most of the variance in the car score is not explained by the dark tetrad scores.

    There’s a smattering of evidence in here that there’s some statistical link between the scores, but it’s not been well explored or presented, and there are issues with the statistical approach. Based on some comments in the discussion section I’d agree with your suggestion that the author is simply trying to confirm their hypothesis.
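
    For what it’s worth, that kind of model comparison is only a few lines with statsmodels - the file and column names here are made up for illustration, not taken from the paper:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical data: 'car' = car score, the rest = dark tetrad subscale scores
    df = pd.read_csv("scores.csv")

    main = smf.ols("car ~ mach + narc + psyc + sad", data=df).fit()
    inter = smf.ols("car ~ (mach + narc + psyc + sad) ** 2", data=df).fit()  # adds pairwise interactions

    print(main.rsquared_adj, inter.rsquared_adj)  # variance explained by each model
    print(anova_lm(main, inter))                  # F-test: do the interaction terms improve the fit?
    ```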