Pinned Tweet
What is a Dense Associative Memory, or Modern Hopfield Network? Our paper will be presented at #ICLR2021 next week. I want to highlight some of the main results here.
Paper: https://arxiv.org/abs/2008.06996
Longer seminar: https://www.youtube.com/watch?v=_QVUyXhu59I
Thread 1/N
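A minimal numerical sketch of the retrieval idea (my own illustration, not code from the paper; the exponential-energy variant with an inverse temperature `beta` is assumed, and all sizes are arbitrary):

```python
import numpy as np

# Dense Associative Memory retrieval: stored patterns are rows of Xi.
# A query is repeatedly pulled toward a softmax-weighted combination of
# the memories; beta controls how sharply one memory dominates.
rng = np.random.default_rng(0)
N, K, beta = 64, 10, 8.0
Xi = rng.choice([-1.0, 1.0], size=(K, N))       # K stored binary patterns

def retrieve(query, steps=5):
    v = query.copy()
    for _ in range(steps):
        p = np.exp(beta * Xi @ v / np.sqrt(N))  # similarity to each memory
        p /= p.sum()
        v = np.sign(p @ Xi)                     # move toward weighted memories
    return v

# Corrupt a quarter of pattern 0, then recover it.
noisy = Xi[0].copy()
flip = rng.choice(N, size=N // 4, replace=False)
noisy[flip] *= -1
print(np.array_equal(retrieve(noisy), Xi[0]))
```

For small K relative to N the corrupted query falls inside the basin of attraction of the stored pattern and is recovered exactly.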
Dmitry Krotov Retweeted
Considerations for transparent and explainable AI. My piece for @itworldca
#ResponsibleAI #TrustworthyAI #AIFactSheets https://www.itworldcanada.com/blog/building-trustworthy-ai/457849
Thanks for the clear explanation of what is happening in this video. The cliff protection is a rule-based system, subject to hardware constraints. There are no implications about AI (or robotics in general) from this video. https://twitter.com/ben11kehoe/status/1440355689830838280
Office sweet office! First time in since Feb 2020.
If you are like me and are wondering about these questions, please check out my new preprint: https://arxiv.org/abs/2107.06446 It turns out Lagrangian functions from physics provide a convenient solution to these problems in multi-layer (or even fully connected) Hopfield Networks.
How can we take advantage of the same degree of architectural modularity that is so important for building powerful feedforward networks, but with guarantees that the network with feedback will have a valid energy function and will converge to a fixed point attractor?
One of the great features of deep learning is that we can easily stack multiple layers (e.g. dense, conv, attention) with arbitrary activation functions to build a useful feedforward network. Wouldn’t it be cool if we could do the same for Modern Hopfield Networks with feedback?
Thanks so much for the invitation, it was fun to participate! https://twitter.com/mtoneva1/status/1390682381493866498
Last but not least, here are the people who contributed to this work:
@YuchenLiangRPI @wrong_whp @Ben_Hoov @LeopoldGrinberg @navlakha_lab @hen_str @mj_zaki @DimaKrotov 8/N
What do individual Kenyon cells learn? For instance, the sentence “Senate majority leader discussed the issue with the members of the committee” activates the top 4 Kenyon cells that have the receptive fields shown below. Visit http://flyvec.org to poke individual neurons. 7/N
Our architecture makes it possible to generate the hash codes for individual words, and for words in a context. You can see the nearest neighbor words for context-dependent embeddings of the words "bank" and "apple". The network can find the correct meanings of these tokens. 6/N
How well does this work? We have compared our FlyVec embeddings with many methods for binarizing continuous word embeddings (GloVe, word2vec). FlyVec demonstrates strong performance across all hash lengths, and works particularly well at small hash lengths. 5/N
In our work we reuse the core computational strategy of this network to extract correlations between words and their context from raw text. In biological terminology, you can think of the context as a “smell” and the word itself as a “visual input”, for example. 4/N
The inhibitory connections of the APL neuron make it possible to encode any input presented to this network as a sparse binary hash code in which the Kenyon cells that are strongly active are assigned the state 1, and the rest of the Kenyon cells are assigned the state 0. 3/N
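The winner-take-all hashing step described in this tweet can be sketched as follows (all dimensions, names, and the 10% projection sparsity are illustrative stand-ins, not the paper's actual values):

```python
import numpy as np

# Sketch of the Kenyon-cell hashing step: a sparse random projection to a
# large layer, then winner-take-all inhibition (standing in for the APL
# neuron) keeps only the top-k most active units as 1s, the rest as 0s.
rng = np.random.default_rng(1)
d_in, d_kc, k = 50, 400, 32          # input dim, Kenyon cells, active cells

W = (rng.random((d_kc, d_in)) < 0.1).astype(float)  # sparse 0/1 projection

def fly_hash(x):
    activity = W @ x
    top = np.argpartition(activity, -k)[-k:]  # indices of k strongest cells
    code = np.zeros(d_kc, dtype=np.uint8)
    code[top] = 1
    return code

h = fly_hash(rng.random(d_in))
print(int(h.sum()))  # exactly k ones: a sparse binary hash code
```

Because exactly k units survive the inhibition, every input maps to a binary code of fixed sparsity, which is what makes the codes comparable across inputs.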
We study a network architecture from the part of the fruit fly brain called the mushroom body. The major input to this network comes from the olfactory system, but there are also inputs from neurons that sense temperature, humidity, and visual stimuli. 2/N
Although our paper on fruit-fly-inspired neural architectures for NLP has already been discussed on Twitter, I want to cast a bit more technical light on it.
#ICLR2021 Can a Fruit Fly Learn Word Embeddings?
Paper: https://arxiv.org/abs/2101.06887
Demo: https://flyvec.org/
1/N
And what about the conventional Hopfield nets with continuous variables? Can they also be derived as a limiting case of Dense Associative Memory? Yes! Please see the derivation in Appendix B here: https://arxiv.org/abs/2008.06996 7/N
Isn’t it true that Modern Hopfield Networks require many-body synapses, as illustrated in this slide, and for this reason are biologically implausible? No, they can be described using only pairwise synapses between neurons. Explained here: https://arxiv.org/abs/2008.06996 6/N
Modern Hopfield Networks are defined by the Lagrangian functions for the feature and memory neurons. Depending on these functions they can have neuron-wise activations, contrastive normalization (which reduces to softmax attention if applied once), or divisive normalization. 5/N
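The softmax case mentioned in this tweet can be checked concretely: choosing the log-sum-exp Lagrangian for the memory neurons makes their activation g = ∂L/∂h exactly the softmax. A small sketch (my own illustration, with a numerical gradient check; the values of `beta` and `h` are arbitrary):

```python
import numpy as np

beta = 2.0

def lagrangian(h):
    # L(h) = (1/beta) * log(sum_j exp(beta * h_j))
    return np.log(np.exp(beta * h).sum()) / beta

def softmax(h):
    # Closed-form activation g = dL/dh for the Lagrangian above.
    e = np.exp(beta * (h - h.max()))
    return e / e.sum()

def num_grad(f, h, eps=1e-6):
    # Central-difference numerical gradient.
    g = np.zeros_like(h)
    for i in range(h.size):
        d = np.zeros_like(h)
        d[i] = eps
        g[i] = (f(h + d) - f(h - d)) / (2 * eps)
    return g

h = np.array([0.3, -1.2, 0.8, 0.1])
print(np.allclose(num_grad(lagrangian, h), softmax(h), atol=1e-6))
```

Different choices of Lagrangian yield different activations in the same way, which is what makes the framework modular.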
What are the desirable properties of Modern Hopfield Networks? They can store a lot of memories, even an exponential (in the dimension of the feature space) number of memories. Proofs are here: 1. https://arxiv.org/abs/1606.01164 2. https://arxiv.org/abs/1702.01929 3. https://arxiv.org/abs/2008.02217 4/N
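For orientation, the capacity results in the cited papers can be summarized in one formula (notation is mine, not from the tweet; ξ^μ are the K stored patterns, σ the state, N the number of neurons):

```latex
% Dense Associative Memory energy with separation function F:
E(\sigma) = -\sum_{\mu=1}^{K} F\!\left(\xi^{\mu} \cdot \sigma\right)
% F(x) = x^{n} (polynomial):  capacity grows as  K_{\max} \propto N^{\,n-1}
% F(x) = e^{x} (exponential): capacity grows as  K_{\max} \sim 2^{N/2}
```

So sharpening F from polynomial to exponential is what moves the capacity from polynomial to exponential in N.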
Why is it important to have an energy function? General systems of non-linear equations can have many complicated behaviors. Because of the energy function, this is not the case for Hopfield nets: the dynamical trajectories always converge to fixed-point attractor states. 3/N
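The convergence claim can be illustrated with the classical binary Hopfield network (a sketch with illustrative sizes; under asynchronous updates with zero self-connections the energy never increases, so the trajectory must settle into a fixed-point attractor):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 40
Xi = rng.choice([-1, 1], size=(3, N))           # three stored patterns
W = (Xi.T @ Xi).astype(float) / N               # Hebbian weights
np.fill_diagonal(W, 0.0)                        # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice([-1, 1], size=N).astype(float)   # random initial state
energies = [energy(s)]
for _ in range(5 * N):                          # asynchronous single-neuron updates
    i = rng.integers(N)
    s[i] = 1.0 if W[i] @ s >= 0 else -1.0       # align neuron i with its field
    energies.append(energy(s))

# Energy is monotonically non-increasing along the trajectory.
print(all(e2 <= e1 + 1e-12 for e1, e2 in zip(energies, energies[1:])))
```

Each flip changes the energy by -(Δs_i)·h_i ≤ 0, and since the state space is finite, a bounded non-increasing energy forces convergence to a fixed point.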