This article is rated Start-class on Wikipedia's content assessment scale. It is of interest to several WikiProjects.
BrainChip is undoubtedly one of the most important commercial players in the neuromorphic industry. Despite this, there are no good secondary sources on BrainChip, which is why Wikipedia has no article on it. I tried to create a BrainChip article myself, but it was rejected 4-5 times. It is worth noting that the company has a social media following and is listed on the ASX stock exchange in Australia (where it was founded) as well as on the pink sheets market in the U.S., making it the only publicly traded neuromorphic company. However, no major media outlet has written an in-depth article on BrainChip, since it has yet to reach commercial success.
I deleted all information related to BrainChip from the spiking neural network article, since it was unsourced, and removing unsourced material is Wikipedia policy.
The previous version was, in my opinion, biased towards one particular source. SNNs were modeled before the mid-1990s, and not necessarily out of dissatisfaction with ANNs. Researchers also enter new fields, and mostly do, out of simple curiosity. Also, we cannot dismiss researchers studying biophysics and infer that they are not, or were not, interested in information-theoretic questions. The quotation from that chapter is now edited into the paragraph. The article needs more work, so let's get to it. (SoyYo (talk) 09:28, 22 July 2008 (UTC))[reply]
Is anyone willing to add the latest developments, like these: http://arxiv.org/vc/arxiv/papers/1206/1206.3227v1.pdf ? VZakharov (talk) 13:14, 4 December 2012 (UTC)[reply]
I'm not sure about the accuracy of this: "Various coding methods exist for interpreting the outgoing spike train as a real-valued number, either relying on the frequency of spikes, or the timing between spikes, to encode information."
Usually the spiking mechanism is interpreted as a stochastic generator emitting a binary value, with an increasing likelihood of firing a spike as the threshold is reached, which may or may not be smoothed to produce real values. Jeblad (talk) 22:55, 28 July 2018 (UTC)[reply]
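To make the two points above concrete, here is a minimal sketch (not from the article; function names and parameters are illustrative) of both ideas: a stochastic spike generator whose firing probability rises as the membrane potential approaches the threshold, and the two common decoding schemes mentioned in the article, rate coding (spike frequency) and temporal coding (inter-spike intervals).

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_spikes(potential, threshold=1.0, steps=1000):
    """Binary spike train from a stochastic generator: the firing
    probability grows (here via a sigmoid, an illustrative choice)
    as the membrane potential approaches the threshold."""
    p = 1.0 / (1.0 + np.exp(-(potential - threshold)))
    return rng.random(steps) < p

def rate_decode(spike_train, dt=1e-3):
    """Rate coding: the real value is the spike frequency (Hz)
    over the observation window."""
    return spike_train.sum() / (len(spike_train) * dt)

def isi_decode(spike_train):
    """Temporal coding: the real value is derived from the mean
    inter-spike interval (in time steps)."""
    times = np.flatnonzero(spike_train)
    return np.diff(times).mean() if len(times) > 1 else float("inf")

weak = stochastic_spikes(0.5)    # below threshold: sparse spiking
strong = stochastic_spikes(2.0)  # above threshold: dense spiking
```

A stronger input drive yields a higher decoded rate and shorter inter-spike intervals, which is the sense in which either scheme recovers a real value from a binary train.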
Tongjinao, I appreciate all the effort you've put in for this page, but your writing is somewhat unclear, and citing Chinese language blog sites doesn't help. For example, what does "Moreover, there is a hardware platform from Intel approving SNN." in the Hardware section mean? I can clean up the grammar, but not if I don't understand what the content is. — Preceding unsigned comment added by Justin Mauger (talk • contribs) 03:48, 28 February 2019 (UTC)[reply]
It sounds like he was referring to Intel's Loihi neuromorphic computing devices. I'm surprised there was no reference to them. 192.55.54.43 (talk) 18:15, 17 July 2019 (UTC)[reply]
Two paragraphs talk about pulse training, but it is not clear what exactly pulse training is. Is it a synonym for spike train? Or is it what training an SNN is called? Wotanii (talk) 14:03, 11 November 2019 (UTC)[reply]
In the first sentence the article says "Spiking neural networks (SNNs) are artificial neural networks [...]", but later it says "As of 2019 SNNs lag ANNs in terms of accuracy", suggesting that SNNs are not ANNs. This is a contradiction.
Also, while it is explicitly stated that "SNN" means "spiking neural network", it is never explained that "ANN" means "artificial neural network".
At multiple points the article refers to "*traditional* ANNs". It is never explained that these are ANNs whose neurons have continuous outputs. I also find the word "traditional" misleading here, since the idea of artificial spiking neural networks is arguably older than the perceptron. I don't know whether there is a better term for non-spiking neural networks, though ("perceptron-based neural network", maybe?).
Edit: later, the term "second generation neural network" is also used to refer to traditional ANNs.
I suggest the following changes:
Wotanii (talk) 14:27, 11 November 2019 (UTC)[reply]
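For editors unsure what the "traditional ANN" vs. SNN distinction above amounts to, here is a minimal sketch (illustrative, not from the article; the leaky integrate-and-fire model is one common spiking neuron, and the parameter values are arbitrary) contrasting a continuous-output unit with a spiking one.

```python
import numpy as np

def sigmoid_neuron(x, w, b):
    """'Second generation' / traditional ANN unit: maps a weighted
    input sum to a continuous output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def lif_neuron(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Leaky integrate-and-fire unit: integrates input current over
    time, leaks toward zero, and emits a binary spike (then resets)
    whenever the membrane potential crosses the threshold."""
    v = 0.0
    spikes = []
    for i in inputs:
        v += dt * (-v / tau + i)  # leaky integration step
        if v >= threshold:
            spikes.append(1)
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)
    return spikes
```

The key contrast: the sigmoid unit produces one real number per input vector, while the LIF unit produces a spike train over time, so information lives in when and how often it fires.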
Scientists found that although spiking neural networks could learn to identify the data they were trained to look for, when such training went uninterrupted long enough, their neurons began to fire continuously no matter what signals they received. Watkins recalled that "almost in desperation," they tried having the simulation essentially undergo deep sleep. They exposed it to cycles of oscillating noise, roughly corresponding to the slow brain waves seen in deep sleep, which restored the simulation to stability. The researchers suggest this simulation of slow-wave sleep may help "prevent neurons from hallucinating the features they're looking for in random noise," Watkins said.
J mareeswaran (talk) 18:54, 19 May 2020 (UTC)[reply]
The provided reference does not explain why the claim is true, nor does it point to additional references such as research papers. In my opinion, this claim requires additional references (and I think the reference provided is not adequate and should be removed). I have now replaced the previous reference to a blog post with the scientific article by Wolfgang Maass published in Neural Networks. — Preceding unsigned comment added by 79.54.170.6 (talk) 11:18, 7 March 2021 (UTC)[reply]
Please don't remove the external links. It is convenient to have access to them all in one place. MrOllie, please don't do that. — Preceding unsigned comment added by 80.92.31.29 (talk) 21:05, 6 September 2023 (UTC)[reply]
I've been following recent research on spiking neural networks. Balancing high biological plausibility against computational efficiency, SFA-based neurons have shown improvement in solving a few of these problems. Please feel free to edit any text or references. 169.231.72.200 (talk) 09:48, 14 February 2024 (UTC)[reply]
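For context on what SFA (spike-frequency adaptation) means mechanically, here is a minimal sketch (illustrative only; one simple way to add adaptation to a leaky integrate-and-fire neuron, with arbitrary parameter values): each spike raises an adaptation variable that temporarily lifts the firing threshold, so a sustained input produces a decelerating spike rate.

```python
def adaptive_lif(inputs, tau=10.0, base_threshold=1.0, beta=0.5,
                 tau_a=50.0, dt=1.0):
    """Leaky integrate-and-fire neuron with spike-frequency adaptation:
    an adaptation variable `a` jumps at each spike and decays slowly,
    raising the effective threshold and slowing subsequent firing."""
    v, a = 0.0, 0.0
    spikes = []
    for i in inputs:
        v += dt * (-v / tau + i)   # leaky integration of input current
        a += dt * (-a / tau_a)     # slow decay of adaptation variable
        if v >= base_threshold + beta * a:
            spikes.append(1)
            v = 0.0                # reset membrane potential
            a += 1.0               # adaptation kicks in after each spike
        else:
            spikes.append(0)
    return spikes
```

Under constant input, the early inter-spike intervals are short and then lengthen as the adaptation variable accumulates, which is the adaptation behavior the research above exploits.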
May I suggest that the name of the article be changed to Artificial spiking neural networks, to disambiguate from other types of spiking networks such as biological networks. EvilxFish (talk) 08:03, 11 April 2024 (UTC)[reply]