I find this plot very telling about the current state of LLMs... maybe more parameters or more engineering overhead will help? Source: arcprize.org/arc-agi/2/
In my opinion, embedding models deserve a bigger share of the praise than they currently get relative to transformers. Whenever there is talk about LLMs, most of the discussion revolves around transformers, but without good embedding infrastructure, I think all of this would have been much more difficult.
What is true? What is hype? What is an ad for AI? www.bbc.com/news/article...
Maybe the goal is to capture the attention of said CEO in the hope of a seat at the table? 🙂
I asked ChatGPT (the free version) for "an idea for a scientific paper I can work on". Here is the first option from its answer 🙂:
It doesn't get the hidden connection between the two topics, the underlying structure; it lists them as separate... Self-organization is one way to achieve dimensionality reduction! It has no grasp of this.
1) Activity-dependent self-organization models 2) Dimension-reduction models.
This might be a good test case for why LLMs are not the answer: I tried using Scispace to do a literature review on the "role of visual experience in the development of visual feature maps". At one point it presented two theoretical frameworks for this topic: ...
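To spell out the connection the tool missed, here is a minimal sketch (mine, not Scispace's or anyone's model from the literature) of a toy self-organizing map: activity-dependent self-organization of a small grid of units that ends up performing dimensionality reduction, mapping 3-D inputs onto 2-D map coordinates. The grid size, learning rate and neighborhood schedule are arbitrary choices for illustration.

```python
# Toy self-organizing map (SOM): activity-dependent self-organization
# that *is* dimensionality reduction -- 3-D inputs land on a 2-D grid.
import numpy as np

rng = np.random.default_rng(0)
grid = 8                                   # 8x8 map of units
weights = rng.random((grid, grid, 3))      # each unit has a 3-D weight vector
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), axis=-1)

for step in range(2000):
    x = rng.random(3)                      # random 3-D input (e.g., a color)
    # best-matching unit: the unit whose weights are closest to the input
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # neighborhood width and learning rate both shrink over training
    sigma = 3.0 * np.exp(-step / 1000)
    lr = 0.5 * np.exp(-step / 1000)
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
    # pull each unit's weights toward the input, weighted by neighborhood
    weights += lr * h[..., None] * (x - weights)

# after training, any 3-D input is summarized by a 2-D grid position
x = rng.random(3)
bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)), (grid, grid))
print(f"input {x} -> map coordinate {bmu}")  # 3-D -> 2-D reduction
```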
LLMs have a wider impact on the human psyche than previously thought
"Stakeholders construct AI differently ... in ways that are useful to them ... and these differences have significant social and educational implications.β codeactsineducation.wordpress.com/2026/01/16/c...
Maybe a not-so-smart take, but there are four types (or subtypes) of ON-OFF direction-selective retinal ganglion cells in mice: upward, downward, backward and forward (Sanes and Masland, 2015).
Yet why didn't nature use an efficient 2-cell binary code for these directions? Nature makes me 🤪.
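To make the point concrete, here is a toy comparison (my own illustration, and my own reading of what a "2-cell binary code" would mean): two binary cells can in principle distinguish four directions, while the retina instead spends one labeled line per direction.

```python
# Hypothetical 2-cell binary code vs. nature's one-hot "labeled line" code.
directions = ["upward", "downward", "backward", "forward"]

# two binary cells, each ON (1) or OFF (0): 2 bits cover 4 directions
binary_code = {d: ((i >> 1) & 1, i & 1) for i, d in enumerate(directions)}

# what nature does: one cell type per direction, active only for "its" direction
one_hot_code = {d: tuple(int(i == j) for j in range(4)) for i, d in enumerate(directions)}

for d in directions:
    print(f"{d:>8}: binary {binary_code[d]}  vs  one-hot {one_hot_code[d]}")
```

One guess at why nature went the redundant way: the labeled-line code degrades gracefully, since losing one cell type loses one direction, whereas in the binary code a single dead cell confuses whole pairs of directions.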
That was a very amusing article to read. My new favourite quote is "Unfortunately, people have a considerable ability to 'explain away' events that are inconsistent with their prior beliefs".
I think there ought to be more support/encouragement for work that independently verifies other results.
How many independent verifications are enough for a result (an experimental hypothesis) to be adopted into other work? When I read a new interesting paper, I am always excited and worried. Excited about the prospects of the results and their potential, worried about the certainty/confidence of its claims.
Submissions (short!) due for SNUFA spiking neural networks conference in <2 weeks! 🤖🧠🧪
forms.cloud.microsoft/e/XkZLavhaJe
More info at snufa.net/2025/
Note that we normally get around 700 participants and recordings go on YouTube and get 100s-1000s views.
Please repost.
The following article is now in press at Psychological Review. Interested to hear what people think! "The successes and failures of Artificial Neural Networks (ANNs) highlight the importance of innate linguistic priors for human language acquisition".
osf.io/preprints/ps...
I like the saying, don't put all your eggs in one basket, unless you have too many eggs to spare.
I am just thinking about the benefits of investing "too" much in new data centers in an age where breakthroughs might be just around the corner. I mean, who knows, maybe a new, more AI-friendly architecture (with the appropriate model, of course) could emerge in the near future.
If the output of behaviour is an action (motor cortex), then shouldn't all prior information (from all relevant areas) converge on the output area (or a pre-staging area) and be integrated? In that case, wouldn't the temporal order of activity from these areas be relevant to behaviour?
Interesting; maybe it would be insightful if we could measure the temporal correlations in spike order across different brain areas. Maybe this kind of study is out there, but I couldn't find it yet... 1/n
Is a brain-wide distributed code at odds with temporal coding? 🤔 I don't know; this thought has been buzzing around my brain lately.
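For what I mean by measuring spike order, here is a minimal sketch with made-up data (the two "areas", the 5 ms lag and all parameters are my own assumptions): a cross-correlogram of spike times from two hypothetical areas, asking whether one area consistently leads the other.

```python
# Cross-correlogram of spike times from two hypothetical brain areas,
# to test whether area B's spikes consistently lag area A's.
import numpy as np

rng = np.random.default_rng(1)
duration = 10.0                                    # seconds of recording
spikes_a = np.sort(rng.uniform(0, duration, 200))  # random spike times, area A
# area B fires ~5 ms after A (plus jitter), simulating a consistent spike order
spikes_b = np.sort(spikes_a + 0.005 + rng.normal(0, 0.002, spikes_a.size))

# histogram of all pairwise delays (B minus A) within a +/- 25 ms window
delays = spikes_b[None, :] - spikes_a[:, None]
delays = delays[np.abs(delays) < 0.025]
counts, edges = np.histogram(delays, bins=50)
peak = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
print(f"peak delay: {peak * 1000:.1f} ms (B lags A)")  # ~5 ms if order is consistent
```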
If symbolic reasoning is the peak of human cognitive evolution, then an AGI should not only reach this level but evolve beyond it and push through to the next level in the cognitive hierarchy.
Academic authors, here's a peek into the black box of journal publishing from a journal editor, if you can bear it:
Was my pleasure working on this project, a great avenue to discuss interesting ideas and make new connections.
True, the stimulus is fixed, but the task changes (before, during and after). However, I mean that we change the stimulus during the attentional task (condition D). For example, increase its level by a certain amount or increase the rate and see how it affects the dominant frequency.
The study is great, though I think it would also be nice to see the response to a change in the stimulus parameters.
I had a quick glance at the paper and I am not an expert, so please take what I say with a grain of salt. I think it is a form of adaptation? I think it is evolutionarily advantageous to be alert to change even when you are focused. The stimulus in the study was fixed in duration, frequency and level.
My uneducated two cents' guess, and I might be wrong (which is likely the case 🙂).
I think the 'physical' shut-off is for physical protection of the sensory organs. Neuronal adaptation is the way to go to filter out uninteresting signals. The brain is only interested in change, I think.
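Here is a toy sketch of the adaptation idea (all parameters arbitrary, nothing from the paper): a unit that responds to the stimulus minus a slowly updated running average, so a constant input fades away while a step change produces a fresh response.

```python
# Toy adaptation model: respond only to change, not to constant input.
import numpy as np

stimulus = np.concatenate([np.full(100, 1.0), np.full(100, 2.0)])  # step at t=100
baseline = 0.0
tau = 0.1          # adaptation rate: how fast the running average catches up
response = []
for s in stimulus:
    response.append(s - baseline)      # respond only to the unexplained part
    baseline += tau * (s - baseline)   # slowly adapt toward the current input

print(f"response just before the step: {response[99]:.3f}")   # ~0: adapted away
print(f"response just after the step:  {response[100]:.3f}")  # ~1: change detected
```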
Another idea I like is making money by promoting a tutorial on how to make money using AI 🙂.
"10 ways you can use AI to improve your....... (placeholder)"
I can't judge its truthfulness, but I think it might be beneficial to do a study on the positive and negative impacts of prizes, awards, hype and fame on science.