Dmitry Krotov

@DimaKrotov

I am a physicist working on neural networks and machine learning. Formerly:

Cambridge, MA
Joined December 2011

Tweets


  1. Pinned tweet
    Apr 29

    What is Dense Associative Memory or Modern Hopfield Network❓ Our paper will be presented at next week. I want to highlight some of the main results here. Paper: Longer seminar: Thread 🧵1/N

  2. Retweeted
    Oct 1
  3. Sep 21

    Thanks for the clear explanation of what is happening in this video. The cliff protection is a rule-based system, subject to hardware constraints. There are no implications about AI (or robotics in general) from this video.

  4. Jul 30

    Office sweet office! First time in since Feb 2020.

  5. Jul 15

    If you are like me and are wondering about these questions, please check out my new preprint: It turns out Lagrangian functions from physics provide a convenient solution to these problems in multi-layer (or even fully connected) Hopfield Networks.

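    A sketch of the construction being referred to (editor's illustration of the general shape, following the earlier two-layer Krotov-Hopfield formulation; consult the preprint for the exact multi-layer equations): each layer l gets its own Lagrangian \mathcal{L}^{l}, the layer's activations are the Lagrangian's gradients, and the energy couples adjacent layers only through pairwise weights W^{l}:

    \[
    E \;=\; \sum_{l=1}^{L} \Big[\, \sum_i x_i^{l}\, g_i^{l} \;-\; \mathcal{L}^{l}(x^{l}) \Big] \;-\; \sum_{l=1}^{L-1} \sum_{i,j} g_i^{l+1}\, W^{l}_{ij}\, g_j^{l},
    \qquad g_i^{l} \;=\; \frac{\partial \mathcal{L}^{l}}{\partial x_i^{l}} .
    \]

    Any admissible choice of Lagrangians then yields a valid energy, which is what makes the construction modular.
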
  6. Jul 15

    How can we take advantage of the same degree of architectural modularity that is so important for building powerful feedforward networks, but with guarantees that the network with feedback will have a valid energy function and will converge to a fixed point attractor?

  7. Jul 15

    One of the great features of deep learning is that we can easily stack multiple layers (e.g. dense, conv, attention) with arbitrary activation functions to build a useful feedforward network. Wouldn’t it be cool if we could do the same for Modern Hopfield Networks with feedback?

  8. May 7

    Thanks so much for the invitation, it was fun to participate!

  9. May 3

    Last but not least, here are the people who contributed to this work: 8/N

  10. May 3

    What do individual Kenyon cells learn❓ For instance, the sentence “Senate majority leader discussed the issue with the members of the committee” activates the top 4 Kenyon cells, whose receptive fields are shown below. Visit to poke individual neurons. 7/N

  11. May 3

    Our architecture makes it possible to generate hash codes both for individual words and for words in context. You can see the nearest-neighbor words for context-dependent embeddings of the words "bank" and "apple". The network can find the correct meanings of these tokens. 6/N

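    A toy illustration (editor's sketch, not FlyVec code) of how nearest-neighbor words are found once every word, in or out of context, has a binary hash code: rank the vocabulary by Hamming distance. The vocabulary and codes below are random stand-ins.

    import numpy as np

    # Random stand-in codes; real FlyVec codes come from the trained network.
    rng = np.random.default_rng(1)
    vocab = ["bank", "river", "money", "apple", "fruit", "iphone"]
    codes = rng.integers(0, 2, size=(len(vocab), 64), dtype=np.uint8)

    def nearest(query_code, codes, vocab, n=3):
        """Rank the vocabulary by Hamming distance to the query code."""
        dists = (codes != query_code).sum(axis=1)
        order = np.argsort(dists)
        return [(vocab[i], int(dists[i])) for i in order[:n]]

    # Neighbors of "bank"'s code (the query itself comes back at distance 0).
    print(nearest(codes[0], codes, vocab))
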
  12. May 3

    How well does this work❓ We compared our FlyVec embeddings with many methods for binarizing continuous word embeddings (GloVe, word2vec). FlyVec demonstrates strong performance across all hash lengths, and works particularly well at small hash lengths. 5/N

  13. May 3

    In our work we reuse the core computational strategy of this network to extract correlations between words and their context from raw text. In biological terminology you can think of the context as a “smell” and the word itself as a “visual input”, for example. 4/N

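    Reading the analogy concretely (editor's toy sketch; the paper's actual preprocessing may differ): the network sees one binary vector in which a bag-of-words block encodes the context (the "smell") and a one-hot block encodes the word itself (the "visual input").

    import numpy as np

    # Hypothetical tiny vocabulary, for illustration only.
    vocab = {"senate": 0, "majority": 1, "leader": 2, "committee": 3}
    V = len(vocab)

    def make_input(context_words, target_word):
        """Concatenate a bag-of-words context block with a one-hot word block."""
        x = np.zeros(2 * V)
        for w in context_words:
            x[vocab[w]] = 1.0            # context block ("smell")
        x[V + vocab[target_word]] = 1.0  # target block ("visual input")
        return x

    print(make_input(["senate", "majority"], "leader"))
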
  14. May 3

    The inhibitory connections of the APL neuron make it possible to encode any input presented to this network as a sparse binary hash code in which the Kenyon cells that are strongly active are assigned the state 1, and the rest of the Kenyon cells are assigned the state 0. 3/N

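    A minimal NumPy sketch of this hashing step (editor's illustration, assuming the APL inhibition acts as k-winners-take-all over a projection onto the Kenyon cells; all sizes and names below are made up, not the paper's parameters):

    import numpy as np

    rng = np.random.default_rng(0)
    N_INPUT, N_KENYON, HASH_K = 50, 400, 16  # hypothetical sizes

    # Synaptic weights from the input layer to the Kenyon cells.
    W = rng.random((N_KENYON, N_INPUT))

    def sparse_binary_hash(x, k=HASH_K):
        """Set the k most active Kenyon cells to 1, the rest to 0."""
        activations = W @ x
        code = np.zeros(N_KENYON, dtype=np.uint8)
        code[np.argsort(activations)[-k:]] = 1  # k-winners-take-all
        return code

    x = rng.random(N_INPUT)              # a toy input pattern
    print(sparse_binary_hash(x).sum())   # -> 16 active cells out of 400
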
  15. May 3

    We study a network architecture that belongs to the part of the fruit fly brain called the mushroom body. The major input to this network comes from the olfactory system, but there are also inputs from neurons that sense temperature, humidity, and visual stimuli. 2/N

  16. May 3

    Although our paper on fruit-fly-inspired neural architectures for NLP has already been discussed on Twitter, I want to cast a bit more technical light on it. Can a Fruit Fly Learn Word Embeddings❓ Paper: Demo: 🧵1/N

  17. Apr 29

    And what about the conventional Hopfield nets with continuous variables❓Can they also be derived as a limiting case of Dense Associative Memory❓ Yes‼️ Please see the derivation in Appendix B here: 7/N

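    For reference, the classical continuous Hopfield energy (Hopfield, 1984) that the derivation recovers; as I read the appendix's punchline, it emerges when the memory-neuron Lagrangian is quadratic, so the hidden layer can be integrated out, leaving pairwise weights T_{ij} = \sum_\mu \xi_{i\mu}\xi_{j\mu}:

    \[
    E \;=\; -\frac{1}{2}\sum_{i,j} T_{ij}\, V_i V_j \;+\; \sum_i \int_0^{V_i} g_i^{-1}(v)\, dv \;-\; \sum_i I_i V_i .
    \]
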
  18. Apr 29

    Isn’t it true that Modern Hopfield Networks require many-body synapses, as illustrated in this slide, and for this reason are biologically implausible❓ No, they can be described using only pair-wise synapses between the neurons. Explained here: 6/N

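    The pairwise description being referred to, as I recall it from the linked explanation: introduce memory neurons h_\mu alongside the feature neurons v_i, and let the only weights be the pairwise matrix \xi_{\mu i} between the two layers:

    \[
    \tau_f\, \frac{dv_i}{dt} \;=\; \sum_\mu \xi_{i\mu}\, f_\mu \;-\; v_i \;+\; I_i,
    \qquad
    \tau_h\, \frac{dh_\mu}{dt} \;=\; \sum_i \xi_{\mu i}\, g_i \;-\; h_\mu,
    \]

    with f_\mu and g_i the gradients of the memory and feature Lagrangians. Integrating out the fast memory neurons recovers the many-body-looking energy, but no many-body synapse is ever needed.
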
  19. Apr 29

    Modern Hopfield Networks are defined by the Lagrangian functions for the feature and memory neurons. Depending on these functions they can have neuron-wise activations, contrastive normalization (which reduces to softmax attention if applied once), or divisive normalization. 5/N

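    Two concrete choices that illustrate the menu in this tweet (editor's recap of standard examples):

    \[
    \mathcal{L} = \sum_i F(x_i) \;\Rightarrow\; g_i = F'(x_i) \quad \text{(neuron-wise activations)},
    \qquad
    \mathcal{L} = \tfrac{1}{\beta}\log \sum_\mu e^{\beta h_\mu} \;\Rightarrow\; f_\mu = \mathrm{softmax}(\beta h)_\mu \quad \text{(softmax attention)} .
    \]
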
  20. Apr 29

    What are the desirable properties of Modern Hopfield Networks❓ They can store a lot of memories, even an exponential (in the dimension of feature space) number of memories‼️ Proof is here: 1. 2. 3. 4/N

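    The energy behind these capacity results, in the Dense Associative Memory form (editor's recap; the precise constants are in the linked proofs): for stored patterns \xi^\mu and binary state \sigma,

    \[
    E(\sigma) \;=\; -\sum_{\mu=1}^{K} F\!\left(\xi^{\mu} \cdot \sigma\right),
    \]

    where a polynomial F(x) = x^n gives a capacity growing like d^{\,n-1} in the number of neurons d, and a rapidly growing F(x) = e^{x} gives a capacity exponential in d.
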
  21. Apr 29

    Why is it important to have an energy function❓ General systems of non-linear equations can have many complicated behaviors. Because of the energy function, this is not the case for Hopfield nets - the dynamical trajectories always converge to fixed point attractor states. 3/N

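    A quick numerical check of this claim (editor's sketch using the softmax/attention variant of the energy; sizes and beta are arbitrary toy choices): the energy printed below decreases monotonically and the state stops moving, i.e. it reaches a fixed point.

    import numpy as np
    from scipy.special import logsumexp, softmax

    rng = np.random.default_rng(2)
    D, K, beta = 32, 8, 4.0
    memories = rng.standard_normal((K, D))   # rows = stored patterns

    def energy(v):
        """E(v) = -(1/beta) * logsumexp(beta * memories @ v) + |v|^2 / 2."""
        return -logsumexp(beta * (memories @ v)) / beta + 0.5 * (v @ v)

    v = rng.standard_normal(D)               # random initial state
    for step in range(15):
        print(step, round(float(energy(v)), 4))
        v = memories.T @ softmax(beta * (memories @ v))  # one update step
    # v ends up essentially at one of the stored patterns.
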
