Most recent "Intro To NCF" Presentation (accessible to Harvard G Suite users)
General Center Questions
FASSE Cluster Questions
SLURM Topics: How to use the compute cluster
Basic UNIX Topics
- I tried to ssh to a login node and it gave me a scary message about the token or IP address changing (DNS spoofing?)
Quantitative QC: Extended BOLD QC via CBSCentral
Qualitative QC: Looking at your data
Preventive QC: at the scanner
Interactions with your Experiment
- Can I view dicoms on my computer similar to on the console? Or, if my dicom conversion went wrong, how can I verify it's my conversion script and not my data?
Operating the Scanner
- Why does the scanner instruct me that the patient bed might move when I start the first scan in my session (usually a localizer)?
Basic MRI Physics and Protocol Questions
- What other scanning protocols are available in the center?
- Are there special sequences to improve/monitor data quality?
- It looks like I will need to use either partial Fourier or iPAT to get the spatial resolution and coverage that I want. Which method should I use?
ALL published work resulting from research studies fully or partially undertaken at the Center for Brain Science must include formal acknowledgement, using the standardized language below, of support from the Harvard Center for Brain Science. If the research involves data collected on the Prisma MRI scanner, the NIH Shared Instrumentation Grant must also be acknowledged.
Acknowledgements should also be included on any posters or presentations.
Please use the following language when acknowledging the Center and the NIH instrumentation grant in all publications.
“This research was carried out in whole or in part at the Harvard Center for Brain Science. This work involved the use of instrumentation supported by the NIH Shared Instrumentation Grant Program; specifically, grant number S10OD020039.”
If you used the SMS/multiband sequences, you should also cite them. Instructions for this can be found here.
For assistance with NCF related issues contact: email@example.com or call (617) 299-9724 and leave a message. Please provide as much detail as possible so that the Helpdesk staff can address your problem quickly. In addition, RC has office hours weekly at 38 Oxford St., behind the Northwest building, and some great documentation from their training sessions: https://www.rc.fas.harvard.edu/training/training-materials/
Questions about network, desktop or laptop support should be directed to Harvard University IT support (HUIT) at firstname.lastname@example.org or (617) 495-7777.
For questions about CBSCentral (XNAT) or the NRG tools (i.e. fcfast) contact the neuroinformatics group at email@example.com. You can visit their page to find out more about what they do.
For questions about setting up scan parameters or MR physics questions, talk to Ross at firstname.lastname@example.org.
For questions about experimental paradigm design, data analysis issues, or comments/questions about anything on the FAQ page, contact Jenn at email@example.com.
The FAS Secure Environment (FASSE) is a central enabling infrastructure for neuroimaging teaching and research whose mission is to provide high performance, high power, robust, reliable and secure computer systems and human expertise to meet the challenges of neuroimaging research and teaching. The FASSE cluster is a collaboration between the Center for Brain Science and the FAS Division of Research Computing. It consists of a compute cluster with a VNC (fasseood) interface and several login nodes.
Note: We transitioned from NCF to FASSE on March 1, 2022. Many resources still refer to NCF and have not been updated to reflect this change yet. If you have any questions about these changes, you can contact Tim at firstname.lastname@example.org or RC at email@example.com
The FASSE cluster is a resource for the Harvard Neuroimaging Community and their collaborators. To access, just sign up for a user account indicating your laboratory and Principal Investigator.
There are several main uses for the FASSE cluster. First, with modern technology and advanced analysis techniques, datasets can be very large and can quickly fill up a user's personal computer. The FASSE cluster therefore provides a safe and secure location to store data. In addition, the data is backed up automatically and regularly, providing peace of mind.
Second, the FASSE cluster provides a compute cluster for running your data analysis on. This can be done in several ways, but mainly by logging in remotely to the cluster via a VNC session. This is where you can run graphical programs to look at your data, and where you can submit jobs to the cluster for number crunching. In addition, if you need both graphical abilities and number crunching abilities, you can use an interactive session.
To request a new account, fill out the online Account Request form. If you select a PI that is for FASSE, the last page should say that NCF (the old cluster) compute resource access has automatically been included. If for some reason it doesn't, please email RC (firstname.lastname@example.org) and let them know.
If you are an outside user, select Users without a Harvard Key (or with a key if you have one).
If you are working in bash, which is the default on the FASSE cluster, you should have a file in your user directory called .bashrc. You can open/create this file using your favorite text editor, such as sublime. Our current default bashrc can be found at https://ncfcode.rc.fas.harvard.edu/nrg/default_bashrc or directly on the cluster as /ncf/mri/01/users/shared/default_bashrc/bashrc.
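As a sketch of how you might install the default bashrc (the path is the shared default given above; the backup step is just a precaution in case you have local customizations you want to keep):

```shell
# Sketch: copy the shared default bashrc into place, keeping a backup first
DEFAULT_BASHRC=/ncf/mri/01/users/shared/default_bashrc/bashrc
if [ -f "$HOME/.bashrc" ]; then
    cp "$HOME/.bashrc" "$HOME/.bashrc.bak"   # keep a backup of your old file
fi
cp "$DEFAULT_BASHRC" "$HOME/.bashrc"
. "$HOME/.bashrc"                            # reload it in the current shell
```

After reloading, anything your old bashrc set (aliases, modules) is gone unless you copy it back in from the .bak file.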
We use two-factor authentication, which requires users to use their username and password as well as second factor, which is an openauth token, in order to gain entry to NCF infrastructure.
Getting your token
When you requested your account, you should have received an email with instructions for setting up the VPN. If you can't find this, you can go to https://docs.rc.fas.harvard.edu/kb/openauth/ for instructions. The instructions include two main ways to get the token, either from your phone or from a desktop application. Keep in mind that this software creates a token that is specific to you, so if you use a different computer, you will need to set it up again, or use the token from your phone. Therefore, if you plan on VPNing to the cluster from multiple locations, it is highly recommended to also install the phone app.
Tips for desktop app: Use the .dmg (Mac) or .exe (Windows) version of the app.
Tips for the phone app: Use the Duo Mobile version, since this is the same thing you might have already installed for the regular Harvard VPN.
However you do it, you will need the 6 digit number it generates to log into the VPN. If you have a new Mac, you might get an error message when you launch the desktop app telling you to download the Java Developer Kit. This means you will need to install java to get the program to run. This can be downloaded from here.
For instructions written by RC, see here.
Getting the VPN client
Now that you have the openauth token generator, you will need the actual VPN client. These steps will only need to be followed once per machine you want to log in from, and this application is not specific to a user, so if you find it already installed on a machine you can use it if you have your own personal openauth token. It is also the same one you use to VPN more generally to Harvard; you just need to change the address as listed below. To get the VPN client (if you don't already have it) go to https://vpn.rc.fas.harvard.edu/; for the username use username@fasse. Again, you will only have to do this step once, as this will install the Cisco AnyConnect VPN Client onto your system.
The web based installation of the client almost always fails. However, after it fails, it gives you the option to download the disk-image. Go ahead and do that. When you try and open the installer package, it may give you an error that it is broken and can't run. This is usually due to your security preferences in your web browser.
On a mac:
- Go to System Preferences -> Security & Privacy
- Click on the General tab
- Click on the lock to allow changes
- Under the heading "Allow applications downloaded from:" choose Anywhere
Then double click on the vpn.pkg and the installation should work. Afterwards, you can go back and set the "Allow applications downloaded from:" to be Mac App Store and identified developers.
For instructions written by RC, see here.
Once you launch the VPN client, make sure the "Connect to" path name at the top is vpn.rc.fas.harvard.edu
Next you will be prompted for your username; make sure to follow it with your domain, @fasse, i.e. jsmith@fasse. DON'T FORGET THE @fasse!! Next you will be prompted for your password and the Two-Step Verification Code. The password is whatever you set your account up with; if you need to change it, see here. The Verification Code field is for the 6 digit passcode from your openauth app. Once you enter this information, you should be able to connect.
Occasionally, you may have to try logging in once or twice, particularly if the token is getting ready to expire right as you put it in. Once you are connected, you want to start a VNC session by following these directions.
There are several choices to be made for your remote desktop.
First, you can choose the FAS-RC Remote Desktop or the Containerized FAS-RC Remote Desktop. You must choose the FAS-RC Remote Desktop (NOT containerized) if you plan to use Docker/Singularity containers during your session.
You can also choose among several partitions:
- test,
- fasse,
- fasse_bigmem and ncf_bigmem,
- fasse_gpu, and
- remoteviz.
Which you choose depends on what you want to do, but when in doubt, use test. Also, keep in mind that you can directly ssh to the cluster via fasselogin if you don't like the remote desktop option, or if it is full or down for any reason. You might also want to read the FAQ on submitting your jobs, because the descriptions below refer to the different queues.
test: These sessions are limited to 8 hours, which works well for a remote desktop session.
fasse: This is just like test, but has a 7 day time limit.
fasse_bigmem and ncf_bigmem: This is like using srun to ncf_bigmem. It is for the rare case where you need a lot of memory (greater than 30 GB) but also need a graphics window. In general, given the different hardware of bigmem (which is also a queue you can submit jobs to via sbatch), the numbers returned could be slightly different from those of the ncf_holy queue. fasse_bigmem has 6 nodes with 500 GB each, and ncf_bigmem has 1 node with 3 TB. Note that ncf_bigmem batch jobs will fail if you request less than 30 GB.
fasse_gpu: This one includes 4 V100 GPUs.
remoteviz: This one is for graphics intensive work, such as looking at your brain data and possibly editing scripts if you like the remote desktop. If you do any number crunching on here, which should generally not be done, the numbers could be slightly different than running it via sbatch to fasse_holy. If you are doing super intensive graphics like 3D brains, you should see the first tip below.
[compute is no longer available on fasse]
This is like using srun to launch an interactive session on ncf_interact. It actually opens up a job on a compute node. You can also request more memory for this session than the VDI desktop. It is for people who want to crunch numbers but also need a graphics window. Examples are running Matlab or SPM where you need the windows, or for testing a script before sbatching it on all your subjects. The numbers returned will be the same as submitting your script via sbatch. If you do not need graphics, please make sure you are submitting your scripts via the command line.
If the remote desktop seems slow, the biggest thing you can do to speed things up is GET A WIRED CONNECTION!!! The wireless network is much slower, and while convenient, is not great for remotely working with the cluster. We also now have the ability to take advantage of VirtualGL, which speeds up graphics intensive programs. To connect via ethernet you need to register your computer: nice instructions from SEAS.
Open On-demand Remote Desktop (VNC)
1. From Chrome (preferred) or Firefox while on the VPN, go to: https://fasseood.rc.fas.harvard.edu and login with your RC credentials.
Research Computing has written a nice FAQ on their website about this: https://www.rc.fas.harvard.edu/resources/documentation/ncf-vdi-apps/ [fasse update pending] This covers the interactive apps, including the remote desktop session, Jupyter notebook, and Rstudio.
For a video of the training session, see here
For info on the other tabs available on the site see their FAQ, which was written for the general Odyssey cluster, though the general features are the same: https://www.rc.fas.harvard.edu/resources/documentation/virtual-desktop/. Scroll down to where it says: Quick Tour of the Dashboard.
Graphics intensive programs: If you are running a graphics intensive program like Freeview or FSLeyes, you can vglrun before the name of the process:
vglrun freeview -v mysubj/mri/T1.mgz
Note that vglrun MUST be used with Connectome Workbench (wb_view) or wb_view will not work!
Copy and Paste between your local machine and the remote desktop: This can be done via the clipboard, on the far right. To open it, click the little arrow on the left side of the desktop.
Then click on the icon that looks like a clipboard. The window it opens is a place to copy and paste things back and forth, always using your laptop's copy/paste commands. To get something from the cluster to show up there, highlight and middle click, or copy with your remote desktop copy commands.
Text editor: The default text editor is Visual Studio Code (VSCode); you will see it as an icon. Emacs is also available via the Applications drop-down menu in the top-left corner.
How to make text look nicer: To make the text look a little 'smoother', from within your remote desktop session go to the top left corner to Applications -> Settings -> Appearance -> Fonts and change Hinting to None.
Edit keyboard shortcuts: To change the command combinations for copy and paste, from within your remote desktop session go to the top left corner to Applications -> Settings -> Appearance -> Settings and check Enable editable accelerators. Then go to a terminal window, click on Edit, and hover over Copy or Paste with your mouse. Then perform the key combination you would like to use. You should see the key combination listed on the right change. For reference, you can use ctrl-c for copy; the system will smartly figure out whether you want to copy something or kill a running process (the usual UNIX meaning of ctrl-c).
How to get your terminal windows (and directories) back when you start a new session:
The key is to 'log out' from the session you want to save. This won't save your matlab session or reload modules, but it will open all your terminal windows back up in the same directory they were when you logged out. To logout:
Go to the top menu bar Applications -> Log out; it's all the way at the bottom in green.
Then click on the green logout button that appears.
Here is the material from the training sessions we had for the upgrade. We broke them down into three topics. Feel free to reach out if you have any questions, Harris: email@example.com.
Here is an overview of the upgrade changes and why this is important for your data analysis stream. Probably only useful for people that were using the old CentOS6 cluster and have to deal with changes to an existing analysis pipeline: https://drive.google.com/open?id=1w5b0_DyUKlHYYNS-qtGdT0Vz7grCpoEl
On running in a container for CentOS6: https://drive.google.com/open?id=1me36iwJnWbQa_FuRxb2WkM2_RwBdn5eb Also see the FAQ below
Video for using the new Open On-demand system: https://drive.google.com/open?id=1hu8nlwiHtawDHH03C4i0PQ4KdXAk-r2M Also see the FAQ.
To ssh to the cluster, you must be on the VPN. You can then ssh to a login node. From a Mac you can use Terminal. From a PC see here. This is a great place to submit scripts from or edit files, but you shouldn't be doing any number crunching: the login-node hardware differs from the cluster, so if you do number crunching here and on the cluster the numbers could differ slightly.
You will be prompted first for your password, and then for your "verification code" (aka the 2 factor authentication code via Duo Mobile).
The hostname "fasselogin" will redirect to one of several specific nodes, e.g., fasselogin2, whichever is least busy. You can also choose a specific one if you prefer.
You can also access a shell from the VNC server site, on the cluster tab.
Also keep in mind, if you need to do an analysis (aka number crunching) but also need graphics or if you just like a nice graphical interface you can use a remote desktop.
The process is the same if you want to use the Open On-Demand remote desktop client or to ssh to one of the login nodes as described here. To enable X11 forwarding in PuTTY you need to configure your settings.
RC has some nice instructions here, see below for some basic steps.
Start up PuTTY. In the configuration box you need to give it the Host Name of the vncserver. The hostname should be fasselogin.rc.fas.harvard.edu for centOS7. You can also save these settings by giving your session a name in the Saved Sessions box and then click Save. You should save it again once you set the X11 forwarding.
Next, under the Connection Category, expand SSH and choose X11. Click on the Enable X11 Forwarding checkbox.
Then, click on the Session Category, and save your settings again. Next time when you open this program, you can Load the saved settings.
Finally, click on Open. You should then be prompted for your username and password. When you are done, make sure you end your SSH session by typing exit at the command prompt before you X out of the PuTTY program.
We use the lmod module system on the NCF. For some basic instructions from RC see here. If you don't see something you want, there are a couple of options for getting it. If you think it might be of use to others, you can email RC (firstname.lastname@example.org) and ask for it. If you think others wouldn't be interested, you are always welcome to download software yourself. RC has a page on this, but keep in mind it was written for Odyssey. This page also has links for how to add packages to R and python.
There are two main places software might be depending on who installed it.
To see a full list of their software, and which version of CentOS they work with, check out their software portal.
To see the NCF specific ones, you need to first load the ncf modules. This command should already be in your .bashrc.
module load ncf
To see the modules available to us:
module avail ncf
If this hangs, use Ctrl-C to interrupt and it should output normally. This will show a list that looks something like:
To select a module, for instance, a newer freesurfer:
module load freesurfer/6.0.0-ncf
If there is a module missing that you would like, for instance a matlab version, just email RC (email@example.com).
See the instructions below for setting SPM, as there is an additional step.
Officially Supported NCF/NRG modules:
Not all the modules we have available are under active support and maintenance. This is a list of the highest-priority modules we actively maintain. If you have a problem with any of these modules, or would like an updated version, please let us know and we will fix it. Please also let us know if you have problems with other ncf modules, but we cannot guarantee that we can provide support for all of them.
Legacy list_loader notice:
The old list_loader system is no longer supported. The software packages that these commands would add to your path are still accessible, so that we can extend the usable life of legacy software. You may be used to some program such as dcm2nii automatically being available for you on login due to a line in your centOS6 bashrc such as . /ncf/tools/current/code/bin/env_setup.sh or load_mricron 2009_12. If you want to access dcm2niix, the newest version of dcm2nii, you can load it through the module mricrogl/2017_07_14-ncf. If you still need to use older software, we recommend that you hard-code the binary's location in your scripts (i.e. /ncf/tools/current/apps/arch/linux_x86_64/mricron/2012_12/dcm2nii). If there is no available module for the software you need, and you are unable to install it yourself, please contact firstname.lastname@example.org.
With the new On-Demand remote desktop, you have the option of launching an interactive Jupyter notebook or an RStudio session. You can see Research Computing's documentation on this; scroll towards the bottom of the page: https://www.rc.fas.harvard.edu/resources/documentation/vdi-apps/
To set up your python environment, you can either do it beforehand by creating different conda environments: https://www.rc.fas.harvard.edu/resources/documentation/software-on-odyssey/python/, or from within the Jupyter notebook as described in the vdi-apps page above.
They also have some specific instructions for setting up TensorFlow: https://www.rc.fas.harvard.edu/tensorflow-on-odyssey/. For other Python pages, you can use the search bar towards the top of any of the documentation pages above.
For information on setting up your R environment, see: https://www.rc.fas.harvard.edu/resources/documentation/software-on-odyssey/r/
Work and data storage space are allocated to the Principal Investigator of each laboratory. Within a lab, the PI is responsible for dividing up space, so see your PI for space requests. In addition, each user has 20 GB of space in their home directory. Please do not use the home directory for data storage - it will fill up fast! And definitely don't run jobs from your home directory, as that can slow everyone down; if you do, your account could be deactivated.
Initially, your permissions on the cluster are set up so that new directories and files are readable and writable to you, and readable by your group, and by the world (in this case, everyone with access to our cluster). You can tell what the permissions of a file are by typing ls -l. You will see something like this:
For a file:
-rw-r--r-- 1 mcmains cnl 54 Oct 5 2012 file_name
For a directory:
drwxr-xr-x 1 mcmains cnl 54 Oct 5 2012 directory_name
For a script:
-rwxr-xr-x 1 mcmains cnl 54 Oct 5 2012 script_name
The first slot is either - for a file or d for a directory. The next three slots designate the owner's permissions (r=readable, w=writable, x=executable). The next three are the permissions for the group, and the last three are everyone else. Following this is the user name of the owner (mcmains) and then the owner's group (cnl).
Often, you will share files with other people in your group. In this case, you might want other members of your group to be able to read and write (edit) your files. In addition, you may not want others on the cluster to be able to read your files. The default permissions can be set by adding a line to your .bashrc file. What you add depends on the permissions you want.
The command is umask followed by three numbers that designate the permissions separately for the three designations (users, group, everyone else).
Each digit tells umask which permissions to take away: 0=rwx (take away nothing), 1=rw (take away execute), 3=r (take away write and execute), 7=nothing (take away everything).
So if you wanted to make your files readable, writable, and executable by you and your group, but no one else, you would add the line:
umask 007
to your .bashrc file. For more information on permissions, including how to change them for just a single file or directory of files, see: https://en.wikipedia.org/wiki/Chmod
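To see the effect for yourself, here is a quick demo you can run anywhere (umask 007 keeps everything for you and your group and removes everything for everyone else; new files never get execute bits regardless of umask, so a new file comes out rw-rw----):

```shell
# Demo: with umask 007, a new file is rw for user and group, nothing for others
tmpdir=$(mktemp -d)         # scratch directory so we don't clutter anything
umask 007
touch "$tmpdir/demo_file"
ls -l "$tmpdir/demo_file"   # the permissions column reads -rw-rw----
```

The umask only affects files created after it is set; existing files keep their permissions until you chmod them.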
This can be done in two main ways, either via the command line or via a GUI. Each has two options depending on how much data you are trying to transfer.
1. Command line via rsync:
This command is great for a large amount of data or big files, as it is more of a sync than a copy. This means if the transfer gets interrupted for any reason, you can relaunch it and it will pick up where it left off. Launch this from a terminal window. RC has a page for this with more details. I will show a simple command below, since their instructions are for Odyssey.
rsync -av email@example.com:directory_or_file_you_want_to_transfer .
You can also transfer to a specific location on your computer by replacing the ending "." with a directory path.
2. Command line via sftp:
This is good for small to mid size files. If you have a large directory, or very large files, you should use rsync. Start an sftp session from your laptop. To do this you will need to have a VPN session going, and a terminal window open:
sftp your_username@fasselogin.rc.fas.harvard.edu
You will be prompted for your password. It is important to note that it is easiest to start the sftp session from the directory that contains the files you want to put on FASSE or from the directory where you want files from FASSE to be placed, as the starting directory is the default directory and ‘tab completion’ doesn’t seem to work (at least not for me).
If you want to PUT files from your laptop to FASSE, go to the directory on FASSE where you want the files to end up and then type:
put name_of_file
If you want to get files from FASSE, go to the directory where the files are (cd, ls, and most unix commands should work). Then:
get name_of_file
You can put or get multiple files using the * character to select multiple files.
3. GUI based via Open On-demand:
See the Open On-demand instructions above, and go to the Files tab at the top. This is good for individual files that aren't too large.
4. GUI based via Filezilla:
You can also use a program to link FASSE to your computer so that you can surf through the directory structures easily on both, and just drag and drop files or folders that you want.
There are a few free downloadable choices. Research Computing has recently changed their recommendation from Cyberduck to Filezilla, and has made nice instructions found here. We strongly recommend using Filezilla as a graphical SFTP client, as it will be supported by RC going forward. You can follow their instructions, except when you specify the host: you want the host to be fasselogin.rc.fas.harvard.edu
Note that you will be prompted for both your password and a "verification code" (i.e. your two-factor authentication code via Duo Mobile)
There are several ways to ‘read’ files on the cluster.
1. If you just want a quick view of the contents without opening up the file you can use 'more':
more name_of_file
Press the space bar to advance the document, and ‘q’ to quit.
2. If you want to open the file up for editing or easier viewing from fasseood (the VNC remote desktop) use sublime.
subl filename &
This will open up a nice GUI based text editor that has all the bells and whistles like find/replace, go to line, and such. The & sign puts the text editor in the background so that you can still type and do things in the terminal window you launched sublime from.
If you want to edit from one of the login nodes (fasselogin.rc.fas.harvard.edu), use gedit:
gedit filename &
3. On your Mac, you also have several options. The one I recommend is Text Wrangler, which is now called BBEdit. Make sure to install the command line editor. You can use this to open any text file from the Finder by right clicking, selecting 'Open With', and then BBEdit. The main reasons I recommend BBEdit are that it has a powerful find/replace function and a unix friendly newline character. In addition, it has a command line tool, so that when you are in a terminal, you can open things right there, just like sublime on the cluster, via:
bbedit name_of_file
It will also let you edit ‘protected’ files by prompting you for a username and password, where other programs like the default text editor will often just deny you permission.
To reset or change your password, use the Research Computing website https://portal.rc.fas.harvard.edu/pwreset/.
The openauth token code required for the two-factor authentication is specific to your username. Therefore, if you want to VPN from a different machine, you will have to set this up again. An alternative is to get the openauth app for your iPhone, Blackberry, or Android. The instructions for doing so are here.
From fasseood or an SSH session, type:
df -h directory_you_want_to_check
From fasseood or an SSH session, go to the directory that contains all of your dicoms. Type:
tar -czvf name.tar.gz what_you_want_to_tar
For instance: tar -czvf mydicoms.tar.gz subjid* or tar -czvf mydicoms.tar.gz *.dcm
To uncompress them: tar -xzvf name.tar.gz
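Putting the two commands together, here is a round trip you can try with throwaway files (the dummy .dcm names are just placeholders for your real dicoms):

```shell
# Round trip: compress a directory of dummy dicoms, then extract a copy
tmp=$(mktemp -d)
cd "$tmp"
mkdir dicoms
touch dicoms/001.dcm dicoms/002.dcm     # stand-ins for real dicoms
tar -czvf mydicoms.tar.gz dicoms        # compress the whole directory
mkdir extracted
tar -xzvf mydicoms.tar.gz -C extracted  # -C extracts into another directory
ls extracted/dicoms                     # both .dcm files are back
```

Tarring the directory (rather than *.dcm) keeps the directory name inside the archive, so extraction recreates it.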
FASSE system users will have access to the Internet while connected to the FASSE VPN; any compatible web browser will get an automated proxy configuration, so for most browsers you should not have to do anything. You should also have access to your email; if not, please contact firstname.lastname@example.org. There may be some issues with the VPN if you have to access other remote secure networks simultaneously; in that case, please contact email@example.com. You should also be able to access the internet from the remote desktop interface. If you launch Firefox and get the message that it is already running, see here. If you experience any trouble accessing the internet, please contact firstname.lastname@example.org.
You can change your SPM version similar to any other brain analysis software using the tools described above. We recommend using module load spm/12.7487-fasrc01. However, you also need to add the SPM version to your matlab path. You can do this by using one of the variables set in the default unix environment (see here), $_HVD_SPM_DIR. To check what that variable is set to, type:
echo $_HVD_SPM_DIR
Remember, if you want to use a different version of SPM on future logins and in different terminals, you must add it to your .bashrc file. Once you have the SPM you want you can add this variable to your ~/matlab/startup.m file (See here for editing files).
This will prepend SPM and its subdirectories to your matlab path, which means it will supersede any version you had previously set in your path. Be aware that if you make changes to your path in matlab and then save it, this version of SPM will become part of your 'permanent' path: it was added on the fly at startup, but then you saved the whole current path. This can become a problem if you are switching versions back and forth. The same is true of Freesurfer, which likes to add things to your matlab path, so check this occasionally and make sure multiple things aren't added (via File -> Set Path in matlab).
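One way to add that line to startup.m from the shell (a sketch only: it assumes $_HVD_SPM_DIR is set by the loaded spm module as described above, and that prepending via addpath/genpath is what you want; check your lab's conventions before adopting it):

```shell
# Sketch: append a line to ~/matlab/startup.m that prepends the SPM tree
# pointed to by $_HVD_SPM_DIR (assumed to be set by the loaded spm module)
mkdir -p "$HOME/matlab"
echo "addpath(genpath(getenv('_HVD_SPM_DIR')))" >> "$HOME/matlab/startup.m"
```

Because the line reads the environment variable at matlab startup, changing the loaded spm module (and thus $_HVD_SPM_DIR) changes which SPM gets prepended, without editing startup.m again.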
A collaboration with IQSS (Alex Storer), a previous graduate student, Caitlin Carey, and our local Tim O'Keefe has resulted in a set of python scripts to help you with your SPM analysis. They are designed to take a batch script you create for a single subject and apply it to a bunch of subjects. They start by downloading your data from CBSCentral, so even if you don't want to use them for analysis, you might want to use them for grabbing your data and converting to SPM niftis. The instructions for using them are located on the ncf GIT site. The scripts do not have to be downloaded, but you do have to load the module for them. In your script, or in your terminal window, you will need the following commands before you can use or even see them.
module load ncf
module load spmtools-centos7/1.0-ncf
You should load whatever the latest tag is, as master is the code that gets updated.
The instructions for use are located here: https://ncfcode.rc.fas.harvard.edu/mcmains/spmtools-centos7/blob/master/README.md
Essentially, they replace the path information in your batch script with that of the new subject. You can easily check the new batch created for correctness, and make any additional specialized updates you want. They have been tested with basic preprocessing (including SPM12, where you can include slice timing information for SMS/multiband scans) through your first level analysis.
Bugs are probably because you mistyped something, have the wrong modules loaded, or have something wrong in a configuration file, so check those first or find a colleague to sanity check. If that doesn't turn up anything, write an email to email@example.com. If you have a specific error, please include that error, along with the output of 'module list' if on the FASSE cluster, your username, and the machine you are working on. Text is preferable to images. The faster I can replicate your issue, the faster it will be solved. This Stack Overflow guide is a great framework to follow. For questions about specific software packages such as freesurfer or redcap, the freesurfer mailing list and redcap community support portal, respectively, are the best go-to options. The one caveat is if programs refuse to load or give library errors -- in that case please contact firstname.lastname@example.org.
Please report any performance issues to RC help at email@example.com, and make sure to CC firstname.lastname@example.org. Please make your problems our problems -- the better we know what annoys you, the better we can fix or ameliorate it.
Please report any issues with usernames, passwords, or group membership to email@example.com.
The reason this happens is that Firefox uses one profile by default, and this profile gets “locked” while in use. To resolve the issue, when logged into the ncf, open a terminal session and launch the profile manager (the standard flag for this is firefox -ProfileManager):
A “Firefox – Choose your profile” dialogue box will come up. Click “Profile” -> “Next”, fill in a description such as “second”, then click “Finish”. Highlight the new profile by clicking it once with the mouse, and click “Start firefox”. Firefox should start up now.
There are a few possibilities:
1) Please make sure that you are either typing the full path, or you have the module loaded. To check if an application is in your path, type 'which <application>' (or 'command -v <application>').
If that doesn't return a path for the application, then make sure you have the module loaded, or specify the whole path if it is something you installed yourself. If it does return the path, see the other possibilities below.
2) Please verify that your home directory has not reached its quota. Go here for instructions on checking your space.
3) If you are logged in remotely, the problem could involve X11 forwarding. On a Mac, make sure that you used the -X or -Y flag when sshing (example). On a PC using PuTTY, there is an option to check ‘Enable X11 Forwarding’.
4) If you are still having issues, email firstname.lastname@example.org.
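The first check above can be run directly in a terminal; here 'ls' stands in for whatever application you are troubleshooting, and the ssh hostname is only an example:

```shell
# 1) Is the application on your PATH? ('ls' stands in for your tool.)
command -v ls || echo "not in PATH -- load its module or give the full path"

# 3) X11 forwarding from a Mac/Linux terminal (hostname is an example):
#      ssh -Y username@ncflogin.rc.fas.harvard.edu
```

If `command -v` prints nothing, the shell cannot find the program, which usually means the module is not loaded.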
I tried to ssh to a login node and it gave me a scary message about the token or IP address changing, DNS Spoofing?
RC actually has a pretty good FAQ for this, which you can check out for up-to-date instructions. The error message looks something like:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: POSSIBLE DNS SPOOFING DETECTED!             @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
The RSA host key for ncfws01.rc.fas.harvard.edu has changed,
and the key for the corresponding IP address 10.242.38.51
has a different value. This could either mean that DNS SPOOFING
is happening or the IP address for the host and its host key
have changed at the same time.
Offending key for IP in /Users/stephanie/.ssh/known_hosts:8
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
ee:bf:41:7d:3f:00:29:1a:8f:05:99:b2:51:28:a0:93.
Please contact your system administrator.
Add correct host key in /Users/stephanie/.ssh/known_hosts to get rid of this message.
Offending key in /Users/stephanie/.ssh/known_hosts:16
RSA host key for ncfws01.rc.fas.harvard.edu has changed and you have requested strict checking.
Host key verification failed.
To fix it, you need to edit the file it tells you: /Users/username/.ssh/known_hosts.
For instance, if you got this message when trying to ssh to ncfws01.rc.fas.harvard.edu (which doesn't exist anymore!), you would open the known_hosts file and delete the line containing that host's information.
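You can also avoid hand-editing the file: `ssh-keygen -R` deletes every entry for a given host. A sketch, using a throwaway stand-in file so nothing real is touched (the hostname matches the example above; on your own machine you would omit `-f` and let it target `~/.ssh/known_hosts`):

```shell
# Throwaway stand-in for ~/.ssh/known_hosts (entries are fake).
printf 'hostA ssh-rsa AAAA1\nncfws01.rc.fas.harvard.edu ssh-rsa AAAA2\n' > /tmp/kh.demo

# ssh-keygen -R removes all entries for the named host (a .old backup is kept).
ssh-keygen -R ncfws01.rc.fas.harvard.edu -f /tmp/kh.demo

grep -q ncfws01 /tmp/kh.demo || echo "stale entry removed"
```

This is safer than deleting lines by number, since a host can have more than one entry.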