ViT-5.0, way larger but the size is kept private by OpenAI
Call for Papers update - ILR+G workshop @iccv.bsky.social
We will now feature a single submission track with new submission dates.
📅 New submission deadline: June 21, 2025
🔗 Submit here: cmt3.research.microsoft.com/ILRnG2025
🌐 More details: ilr-workshop.github.io/ICCVW2025/
#ICCV2025
Some universities give monetary rewards to scientists when they publish, so these researchers may be incentivised to slice their work into several smaller papers to earn more
Update: #ICML sent an email asking reviewers to update reviews and add an "update after rebuttal" section.
Although the review process is far from perfect in ML and CV conferences, I welcome the fact that ICML is trying to improve it.
I completely agree! The issue, however, is that authors can't engage in the discussion unless reviewers respond or ask for clarification, and most reviewers don't.
It's stated that "the reviewer is required to acknowledge the response and agree to update the review in light of the response if necessary"
Yes, the discussion runs until April 8th. Still, reviewers had to acknowledge and update their reviews by April 4th at the latest.
ICML introduced a button for reviewers to acknowledge that they have read rebuttals and will take them into consideration.
The idea sounds nice, but in practice most reviewers (around 75% in my batch of papers as a reviewer) just clicked the button without leaving a comment or updating scores...
We're still waiting to hear back from the conference, but I have low expectations at this stage...
Unfortunately, it hasn't even been discussed so far... I'm in favour of the motion!
Ok, thank you for the answer!
Are there plans to organize a CVPR conference outside of North America?
n/n
Paper: arxiv.org/abs/2502.03227
Code: github.com/pfdp0/min_de... (coming soon)
Results: the method generalizes beyond label supervision in classification and reaches high accuracy in SSL
4/n
We investigate various applications:
- extending the PCA algorithm to non-linear decorrelation
- learning minimally redundant representations for SSL
- learning features that generalize beyond label supervision in supervised learning
Algorithm overview: dependency predictors minimize the reconstruction error by learning how dimensions relate, while the encoder maximizes the error by reducing dependencies
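A minimal sketch of this min-max objective. Here closed-form linear least-squares predictors stand in for the paper's small predictor networks (an assumption for illustration — the actual method uses learned nonlinear predictors):

```python
import numpy as np

def dependency_loss(z):
    """For each dimension i of the features z (shape: samples x dims),
    fit a least-squares predictor of z[:, i] from the other dimensions
    and return the mean reconstruction error.
    Predictors *minimize* this error; the encoder would *maximize* it,
    pushing feature dimensions toward independence."""
    n, d = z.shape
    errs = []
    for i in range(d):
        others = np.delete(z, i, axis=1)
        # closed-form least squares — a stand-in for the small networks
        w, *_ = np.linalg.lstsq(others, z[:, i], rcond=None)
        pred = others @ w
        errs.append(np.mean((z[:, i] - pred) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
a = rng.normal(size=(1000, 1))
print(dependency_loss(np.hstack([a, a])))        # duplicated dims: near 0
print(dependency_loss(rng.normal(size=(1000, 2))))  # independent dims: high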
3/n
Our method employs an adversarial game where small networks identify dependencies among feature dimensions, while the main network exploits this information to reduce dependencies.
Example of uncorrelated random variables that are not independent: x_2 = (x_1)^2 with x_1 uniformly distributed on [-1,1]
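This example can be checked numerically (a quick sketch; sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(-1.0, 1.0, size=100_000)
x2 = x1 ** 2  # deterministic function of x1, hence fully dependent

# Linear correlation vanishes: E[x1 * x1^2] = E[x1^3] = 0 by symmetry
print(np.corrcoef(x1, x2)[0, 1])  # ~0

# Yet x2 is entirely determined by x1: conditional means differ sharply
print(x2[np.abs(x1) > 0.5].mean())  # ~0.58
print(x2[np.abs(x1) < 0.5].mean())  # ~0.08
```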
2/n
Currently, most ML techniques rely on minimizing the covariance between output feature dimensions to extract minimally redundant representations.
Still, this is not sufficient as linearly uncorrelated variables can still exhibit nonlinear relationships.
Did you know that a PCA decomposition or SSL decorrelation techniques (e.g. Barlow Twins) don't necessarily extract minimally redundant/dependent features?
Our paper explains why and introduces an algorithm for general dependence minimization.
🧵
I miss the video explanation 🎶
Reinforcement learning: read the "popular with friends" feed and follow new accounts.
As a reviewer, it's difficult to check for potential plagiarism (e.g. from an arXiv preprint), as we don't have access to the authors' names and should avoid breaking anonymity.
Should conferences implement a new "role" dedicated to spot plagiarism?
@cvprconference.bsky.social @iclr-conf.bsky.social
The only real new contribution in our opinion is an evaluation in combination with newer variants of DETR. It's highly unlikely the paper would have been accepted if the reviewers were aware of our earlier work.
Looking closer into the paper, it becomes obvious that the claimed contributions are all rephrasings of ours. For any of the remaining (minor) differences, the two methods are not explicitly compared in the paper, neither experimentally nor in the discussion, although that's what one would expect...
Deliberately concealing the similarities between the two works and reusing our illustrations without proper attribution are clear scientific integrity violations that need to be addressed. We reported the case to the PCs and hope the conference will take proper action.
Our CVPR 2023 paper: arxiv.org/pdf/2307.02402
The ACMMM'24 paper's open review: openreview.net/forum?id=N3y...
🚨 A peer-reviewed publication from MM'24 copied our CVPR 2023 paper! #plagiarism
The authors rephrased our method, but their approach is not different from ours.
Surprisingly, they cited us for general observations but did everything they could to hide our contributions from the readers/reviewers.
Very nice slides, thank you!
Everyone should know you can't see if you put a hat over the i's 👀
x[-(k % len(x))]
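Without the parent post the context is missing; assuming this reply answers how to index the k-th element from the end of a sequence with wraparound, a minimal sketch (the function name is hypothetical):

```python
def kth_from_end(x, k):
    """Hypothetical wrapper: element k positions from the end of x,
    wrapping around when k is 0 or exceeds len(x)."""
    return x[-(k % len(x))]

x = [10, 20, 30, 40]
print(kth_from_end(x, 1))  # 40 (last element)
print(kth_from_end(x, 5))  # 40 (5 % 4 == 1, so it wraps to the last element)
print(kth_from_end(x, 0))  # 10 (0 % 4 == 0, and x[-0] is x[0], the first element)
```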