Dear all,
I am trying to build a network in which one of the connection weights changes according to a given function. Specifically, I am looking for a step function: the weight should stay constant at a negative value until a given time point and then suddenly switch to a positive value. As far as I can tell, the existing plastic synapse models cannot do this.
I want to do this in order to model a rebound effect, where a neuron fires after being released from an inhibitory current. This effect takes place over a time scale of seconds in the circuit I am studying, so an existing model with built-in GABA-mediated rebound doesn't do the trick.
So is there a way to manually change a connection weight during the simulation? If not, is there some other way I could achieve the same effect in NEST?
Thanks in advance!
Best,
Ryan
Hi,
Can someone point me to a summary of changes, other than the git log, so I can
better understand the changes introduced in the NEST 3 releases?
Thanks,
Itaru.
Dear all,
The *abstract submission deadline* for the (virtual) *NEST Conference
2021* has been *extended* to Wednesday, *26 May*. We are still looking
forward to your contributions!
The NEST Conference provides an opportunity for the NEST Community to
meet, exchange success stories, swap advice, and learn about current
developments in and around the NEST spiking network simulator and
its applications.
This year's conference will again take place as a *virtual
conference* on *Monday/Tuesday 28/29 June 2021*, followed by a virtual
NEST User Hackathon until Friday 2 July.
We are inviting contributions to the conference, including plenary
talks, "posters", breakout sessions and workshops on specific topics.
*Important dates*
*16 May 2021* — Deadline for NEST Initiative Membership applications
*26 May 2021* — Deadline for submission of contributions
*08 June 2021* — Notification of acceptance
*21 June 2021* — Registration deadline
*28 June 2021* — NEST Conference 2021 starts
For more information on how to submit your contribution, register and
participate, please visit the conference website
*https://nest-simulator.org/conference*
We are looking forward to seeing you all in June!
Hans Ekkehard Plesser, Dennis Terhorst, Anne Elfgen & many more
---------------------
* Registration fee for non-members: 50 CHF
Registration fee for NEST Initiative members: 20 CHF
Annual NI membership fee: 25 CHF
Hi Tom,
As a DIY workaround, you can use the RunManager context to simulate in small steps and break out if the simulation gets too slow. I haven't tested the code, just sketching from memory. Instead of calling nest.Simulate(1000), use

import time
import nest

with nest.RunManager():
    for _ in range(100):
        t = time.time()
        nest.Run(10)
        if time.time() - t > 5:
            break
The logic is as follows: you split the 1000 ms into 100 steps of 10 ms each. Running these steps with Run() inside a RunManager() context is fast. You then use Python's time module to measure how long each 10 ms step takes and break if it takes too long, here with a 5 s limit. Afterwards you can use GetKernelStatus to read the current time in the simulation.
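The control flow of that sketch can be tried out without a NEST installation by swapping in a stand-in for Run(); everything below except Python's time module is a made-up placeholder, not NEST API:

```python
import time

def run_chunk(duration_ms):
    """Placeholder for nest.Run(duration_ms); sleeps briefly to mimic work."""
    time.sleep(0.001)

def simulate_with_watchdog(total_ms, chunk_ms, wall_limit_s):
    """Advance the 'simulation' in chunks; stop early if one chunk exceeds wall_limit_s."""
    simulated = 0.0
    while simulated < total_ms:
        t0 = time.time()
        run_chunk(chunk_ms)
        simulated += chunk_ms
        if time.time() - t0 > wall_limit_s:
            break  # the simulation has become too slow; bail out
    return simulated

print(simulate_with_watchdog(1000.0, 10.0, 5.0))  # prints 1000.0 when every chunk is fast
```

With a real network one would replace run_chunk with nest.Run inside a RunManager context, as in the snippet above.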
It would be interesting to add this as a kernel feature. Let me know if it works!
Best,
Hans Ekkehard
--
Prof. Dr. Hans Ekkehard Plesser
Head, Department of Data Science
Faculty of Science and Technology
Norwegian University of Life Sciences
PO Box 5003, 1432 Aas, Norway
Phone +47 6723 1560
Email hans.ekkehard.plesser@nmbu.no
Home http://arken.nmbu.no/~plesser
On 06/05/2021, 16:08, "TOM BUGNON" <bugnon@wisc.edu> wrote:
Hi all,
Under some circumstances, simulations can slow down to the point where nest.Simulate() no longer advances and stays stuck at a given virtual time, with a "realtime factor" of 0. I suppose this can happen, for instance, when a network falls into a regime of runaway excitation in which a massive number of spikes are exchanged.
I'm looking for a way to stop the simulation in such a case (say, when the realtime factor drops below a set threshold, or when the output files have not been updated for a certain duration), ideally such that the program can continue running rather than crashing. If anyone has a suggestion for how to work around this issue, I'd be happy to hear it.
Thanks in advance! Best, Tom
Dear NEST Users & Developers!
I would like to invite you to our next fortnightly Open NEST Developer
Video Conference, today
Monday 10 May, 11.30-12.30 CEST (UTC+2).
As usual, in the project team round, a contact person from each team will
give a short statement summarizing ongoing work in the team and
cross-cutting points that need discussion among the teams. In the
remainder of the meeting we will go into a more in-depth discussion of
topics that came up on the mailing list or were suggested by the teams.
Today we will take a deeper look at CI/testing in the current
GitHub Actions context, and at STDP models in particular.
Agenda
Welcome
Review of NEST User Mailing List
Project team round
In-depth discussion
* Interactively debugging GitHub Actions
<https://github.com/nest/nest-simulator/actions> failures
* Discussion of STDP synapse unit testing #1840
<https://github.com/nest/nest-simulator/pull/1840>
The agenda for this meeting is also available online, see
https://github.com/nest/nest-simulator/wiki/2021-05-10-Open-NEST-Developer-…
Looking forward to seeing you soon!
best,
Dennis Terhorst
------------------
Log-in information
------------------
We use a virtual conference room provided by DFN (Deutsches Forschungsnetz).
You can use the web client to connect. However, we encourage everyone to
use a headset for better audio quality, or even a proper video
conferencing system or software (see below) where available.
Web client
* Visit https://conf.dfn.de/webapp/conference/97938800
* Enter your name and allow your browser to use camera and microphone
* The conference does not require a PIN; just click join and you're in.
In case you see a dfnconf logo and the phrase "Auf den
Meetingveranstalter warten" ("waiting for the meeting host"), just be
patient; the meeting host needs to join first (a voice will tell you).
VC system/software
How to log in with a video conferencing system depends on your VC system
or software.
- Using the H.323 protocol (e.g. Polycom): vc.dfn.net##97938800 or
194.95.240.2##97938800
- Using the SIP protocol: 97938800@vc.dfn.de
- By telephone: +49-30-200-97938800
For those who do not have a video conference system or suitable
software, Polycom provides a pretty good free app for iOS and Android,
so you can join from your tablet (Polycom RealPresence Mobile, available
from AppStore/PlayStore). Note that firewalls may interfere with
videoconferencing in various and sometimes confusing ways.
For more technical information on logging in from various VC systems,
please see
http://vcc.zih.tu-dresden.de/index.php?linkid=1.1.3.4
Hi everyone!
I realized today that profiling PyNEST is easier than I thought. In IPython, you can just run
run -p -s cumulative -D srn.prof ../src/pynest/examples/store_restore_network.py
which will run the script, present a summary sorted by cumulative time and write binary profiling data (pstats format) to file srn.prof.
Then run (gprof2dot is available from, e.g., PyPI)
gprof2dot -f pstats -o srn.dot srn.prof
and finally
dot -Tpdf srn.dot -o srn.pdf
Both tools have lots of options. In my case (an older version of the script above, currently under review in #1919, not yet in master), the attached PDF resulted, showing that getting connection properties indeed takes a lot of time. Note that the graph only resolves time spent in Python code; time spent in C++ code hides behind "run()".
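For completeness, the same kind of pstats data can also be collected outside IPython with the standard library alone. A minimal sketch, with a toy function standing in for the actual example script:

```python
import cProfile
import io
import pstats

def toy_workload():
    """Stand-in for the real script being profiled."""
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
toy_workload()
profiler.disable()

# profiler.dump_stats("toy.prof") would write the same binary pstats
# format that gprof2dot -f pstats reads.

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print("toy_workload" in report)  # → True: the toy function shows up in the top entries
```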
Below are some more timing results from a network of 1000 neurons with 100,000 connections:
In [16]: %time c = nest.GetConnections()
CPU times: user 66.5 ms, sys: 8.09 ms, total: 74.6 ms
Wall time: 75.7 ms
In [17]: %time c = nest.GetConnections().weight
CPU times: user 869 ms, sys: 75.3 ms, total: 944 ms
Wall time: 955 ms
In [18]: %time c = nest.GetConnections().get("weight", output="pandas")
CPU times: user 1.69 s, sys: 186 ms, total: 1.88 s
Wall time: 1.9 s
Clearly, GetConnections() is quite fast, while reading out the weights is costly. What maybe surprised me most is that returning the data as a Pandas DataFrame costs a whole extra second; I wonder if we are doing something suboptimal here.
Best,
Hans Ekkehard