Andrej Studen/Merlin 4 years ago
parent
commit
3dec2cb5da
25 changed files with 1441 additions and 369 deletions
  1. 82 2
      README.md
  2. 0 0
      Resources/.gitkeep
  3. 37 13
      pythonScripts/addSegmentations.py
  4. 0 83
      pythonScripts/linkOrthanc.py
  5. 87 18
      pythonScripts/populateImagingFromOrthanc.py
  6. 143 129
      pythonScripts/preprocess.py
  7. 124 0
      pythonScripts/runSegmentation.py
  8. 0 96
      pythonScripts/scanOrthanc.py
  9. 133 0
      segmentation/model/modelConfig.cfg
  10. 133 0
      segmentation/model/modelConfig.cfg.template
  11. BIN
      segmentation/saved_models/DM_defaults.DM_train_VISCERAL16_Fold1.final.2019-10-01.07.46.19.932616.model.ckpt.data-00000-of-00001
  12. BIN
      segmentation/saved_models/DM_defaults.DM_train_VISCERAL16_Fold1.final.2019-10-01.07.46.19.932616.model.ckpt.index
  13. BIN
      segmentation/saved_models/DM_defaults.DM_train_qtii_LABELMASKS4.final.2020-10-31.05.59.36.425298.model.ckpt.data-00000-of-00001
  14. BIN
      segmentation/saved_models/DM_defaults.DM_train_qtii_LABELMASKS4.final.2020-10-31.05.59.36.425298.model.ckpt.index
  15. 6 0
      segmentation/saved_models/INFO_ABOUT_MODELS.txt
  16. 2 0
      segmentation/test/testChannels_CT.cfg
  17. 1 0
      segmentation/test/testChannels_CT.cfg.template
  18. 58 0
      segmentation/test/testConfig.cfg
  19. 59 0
      segmentation/test/testConfig.cfg.template
  20. 2 0
      segmentation/test/testNamesOfPredictions.cfg
  21. 1 0
      segmentation/test/testNamesOfPredictions.cfg.template
  22. 2 0
      segmentation/test/testRoiMasks.cfg
  23. 1 0
      segmentation/test/testRoiMasks.cfg.template
  24. 504 28
      slicerModule/iraemmBrowser.py
  25. 66 0
      templates/segmentation.json.sample

+ 82 - 2
README.md

@@ -2,6 +2,82 @@
 
 Manage images and data related to irAE project.
 
+# Slicer module
+A Slicer module was created to assist Radiology and Nuclear Medicine physicians in
+reviewing the images. The sections below describe the installation, setup and usage of the module.
+
+## Installation
+
+Here is the installation [video][iraeMMInstallation] that shows the required steps.
+
+Download the [code][iraemm] and [dependencies][SlicerLabkeyExtension] and unzip them. To
+let Slicer know where the files are, open Slicer and, under Edit->Application settings,
+select the Modules section. Under Paths, click Add and navigate to the newly unzipped
+directories. We need `labkeyBrowser` and `DICOMtools` from `SlicerLabkeyExtension` and
+`slicerModule` from the `iraeMM` code. After clicking `OK`, Slicer must be restarted.
+
+## Setup
+To access LabKey, the Slicer tools must be configured. Do that by selecting the LabKey->labkeyBrowser
+module from the module list and filling in the appropriate fields.
+
+### Onko-nix
+To access the OIL internal site, the settings are:
+- Server: `http://onko-nix.onko-i.si:8080`
+- LabKey username: the username given at the LabKey site, typically your email
+- LabKey password: the password generated when accessing the LabKey site
+
+The rest needn't be changed. 
+
+### Setup verification and storage
+Once the data is entered, click `Init` to check whether LabKey can be accessed. If
+the button turns green, the connection works. Then click `Save configuration`.
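+
+A sketch of the stored connection settings, assuming `Save configuration` writes them
+to `~/.labkey/network.json` (the file the python scripts in this repository read);
+the field names are illustrative, not confirmed:
+
+```json
+{
+  "host": "http://onko-nix.onko-i.si:8080",
+  "user": "name.surname@example.com",
+  "password": "generated-labkey-password"
+}
+```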
+
+## Usage
+
+See the [video][iraeMMWorkflow] of module use, which illustrates the steps below.
+An older version is available [here][iraeMMWorkflowOld].
+
+Use the LabKey->iraemmBrowser module. The `Patients` section lets you select the patient
+and the corresponding visit. Clicking `Load` fetches the data from the server.
+
+### Converting segmentations
+The segmentations on the server are stored as label maps, which have to be
+converted to segments for Slicer. To do that, select the `Segmentations` module
+from the drop-down menu. Under Active segmentations, create a new segmentation
+by selecting the `Create new segmentation` option from the pull-down menu. Scroll down
+to `Export/import models and labelmaps` and change the mode to `Import` by
+moving the radio button selection. The Input type should be set to `labelmap`. The input
+node should match the selected patient/visit pair and should end in `Segm`. Click the `Import`
+button further down.
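+
+The conversion can also be scripted in Slicer's python console; a sketch, assuming
+the loaded labelmap is the only node whose name ends in `Segm`:
+
+```python
+# find the loaded labelmap volume (name ends in Segm)
+labelmapNode = slicer.util.getNode('*Segm')
+# create an empty segmentation node and import the labelmap into it
+segNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLSegmentationNode')
+slicer.modules.segmentations.logic().ImportLabelmapToSegmentationNode(labelmapNode, segNode)
+```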
+
+### Removing labelmap volume
+The source labelmap volume will obscure other volumes, so we should delete it. Do that
+by selecting the `Volumes` module and setting `Active volume` to the labelmap
+used in the segmentation creation. Once it is selected, choose `Delete current volume` from
+the same pull-down menu next to `Active volume`.
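+
+From the python console this is a single call (continuing the sketch above):
+
+```python
+# delete the labelmap volume so it no longer obscures the other volumes
+slicer.mrmlScene.RemoveNode(labelmapNode)
+```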
+
+### Viewing segmentations
+The labelmap was converted to a set of segments, which are listed in the `Segmentations`
+module. Clicking the open/closed eye icon makes a segment visible or invisible
+in the views. Further adjustments can be made in the `View controls` module, where
+the visibility and display mode of each volume can be set. For the segmentations, which
+appear in the top layer, continuous, continuous with sharpened edges, or edge-only
+modes are available by clicking the display icon.
+
+## Entering review
+The module has a review section. Select LabKey->iraemmBrowser and navigate to the review
+section. Four levels of agreement can be selected and an additional Comments field is
+available. Once filled in, the review is submitted by clicking the Submit button. Choices can be
+changed at any time and new values can be pushed to the database with further clicks of
+the button.
+
+## Clearing
+Once done, a patient should be cleared to minimize interference in the segmentation
+evaluation. Do that by pressing `Clear` in the `Patients` section of the `iraemmBrowser` module.
+
+
 ### Dependencies 
 To access LabKey, the [python API][labkeyInterface] was used.
 Anonymization and NIfTI conversion are based on phenomenal [nibabel][] tools. 
@@ -12,8 +88,12 @@ Data storage is provided by [Orthanc][] with associated [interface][orthancInter
 Anonymization must be run as a `tomcat8` user for access to data files. Check setup 
 in the `anonymization.py` and run it with `runPython.sh anonymization.py`. 
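+
+For example (a sketch, assuming sudo access to the `tomcat8` account):
+
+```bash
+sudo -u tomcat8 ./runPython.sh anonymization.py
+```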
 
-
+[SlicerLabkeyExtension]:http://wiscigt.powertheword.com/labkey/SlicerLabkeyExtension/-/archive/SlicerExtensionIndex/SlicerLabkeyExtension-SlicerExtensionIndex.zip
+[iraemm]: http://wiscigt.powertheword.com/oil/iraemm/-/archive/master/iraemm-master.zip
 [nibabel]: https://nipy.org/nibabel/gettingstarted.html
 [labkeyInterface]: http://wiscigt.powertheword.com/andrej.studen/labkeyInterface
 [Orthanc]:https://www.orthanc-server.com
-[orthancInterface]: http://wiscigt.powertheword.com/andrej.studen/orthancinterface
+[orthancInterface]: http://wiscigt.powertheword.com/andrej.studen/orthancinterface
+[iraeMMWorkflowOld]: https://med1.fmf.uni-lj.si/owncloud/index.php/s/wrKOo1iUzgePzTi
+[iraeMMWorkflow]: https://med1.fmf.uni-lj.si/owncloud/index.php/s/iEETqswhTjlI2hV
+[iraeMMInstallation]: https://med1.fmf.uni-lj.si/owncloud/index.php/s/pAe4NBONHOGgEYO

+ 0 - 0
Resources/.gitkeep


+ 37 - 13
pythonScripts/addSegmentations.py

@@ -6,6 +6,14 @@ import nibabel
 import shutil
 import sys
 
+if len(sys.argv)<3:
+    print("Usage: {} sourceDir version (v1 or similar)".format(sys.argv[0]))
+    sys.exit(1)
+
+sourceDir=sys.argv[1]
+ver=sys.argv[2]
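+#example invocation (hypothetical arguments): python addSegmentations.py segmentations v1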
+
+shome=os.path.expanduser('~nixUser')
 fhome=os.path.expanduser('~')
 with open(os.path.join(fhome,".labkey","setup.json")) as f:
     setup=json.load(f)
@@ -32,18 +40,22 @@ ds=db.selectRows(project,'study',dataset,[])
 #imageSelector={"CT":"CT","PET":"PETWB"};
 imageResampledField={"Segm":"Segmentation"}
 
+participantField='PatientId'
+#for prospective
+#participantField='ParticipantId'
+
 #projectNIfTIBase=os.path.join(labkeyBase,'files',project,'@files/nifti')
 #use webdav to transfer file (even though it is localhost)
 
 
-def getPatientLabel(row):
-    return row['PatientId'].replace('/','_') 
+def getPatientLabel(row,participantField='PatientId'):
+    return row[participantField].replace('/','_') 
 
 def getVisitLabel(row):
     return 'VISIT_'+str(int(row['SequenceNum']))
 
-def getStudyLabel(row):
-    return getPatientLabel(row)+'-'+getVisitLabel(row)
+def getStudyLabel(row,participantField='PatientId'):
+    return getPatientLabel(row,participantField)+'-'+getVisitLabel(row)
 
 def updateRow(project,dataset,row,imageResampledField,gzFileNames):
     for im in imageResampledField:
@@ -55,16 +67,28 @@ for row in ds["rows"]:
 
     #interesting files are processedDir/studyName_CT_notCropped_2mmVoxel.nii
     #and processedDir/studyName_PET_notCropped_2mmVoxel.nii
-    gzFileNames={im:\
-            getStudyLabel(row)+'_'+im+'.nii.gz'\
+
+    #standard names provided by Zan and Daniel
+    baseFileNames={im:\
+            getStudyLabel(row,participantField)+'_'+im \
                 for im in imageResampledField}
     
+    #append suffix to base name for source files
+    gzSrcFileNames={im:baseFileNames[im]+'.nii.gz'\
+            for im in baseFileNames}
+
+    #add version to out files
+    gzOutFileNames={im:baseFileNames[im]+'_'+ver+'.nii.gz'\
+            for im in baseFileNames}
+
     #build/check remote directory structure
     remoteDir=fb.buildPathURL(project,\
-            ['preprocessedImages',getPatientLabel(row),getVisitLabel(row)])
+            ['preprocessedImages',getPatientLabel(row,participantField),\
+            getVisitLabel(row)])
 
+    #target files
     gzRemoteFiles={im:remoteDir+'/'+f\
-            for (im,f) in gzFileNames.items()}
+            for (im,f) in gzOutFileNames.items()}
 
     remoteFilePresent=[fb.entryExists(f)\
             for f in gzRemoteFiles.values()]
@@ -76,11 +100,11 @@ for row in ds["rows"]:
     if all(remoteFilePresent):
         print("Entry for row done.")
         updateRow(project,dataset,row,imageResampledField,\
-                gzFileNames)
+                gzOutFileNames)
         continue
 
-    inputDir=fb.buildPathURL(project,['segmentations']) 
-    inputFiles={im:inputDir+'/'+f for (im,f) in gzFileNames.items()}
+    inputDir=fb.buildPathURL(project,[sourceDir]) 
+    inputFiles={im:inputDir+'/'+f for (im,f) in gzSrcFileNames.items()}
     
     for im in inputFiles:
         f=inputFiles[im]
@@ -88,7 +112,7 @@ for row in ds["rows"]:
             print("Input file {} not found".format(f))
             continue
         print("Found {}".format(f))
-        localFile=os.path.join(tempBase,gzFileNames[im])
+        localFile=os.path.join(tempBase,gzSrcFileNames[im])
         print("Local {}".format(localFile))
         fb.readFileToFile(f,localFile)
         fb.writeFileToFile(localFile,gzRemoteFiles[im])
@@ -96,7 +120,7 @@ for row in ds["rows"]:
         os.remove(localFile)
 
     #update row and let it know where the processed files are
-    updateRow(project,dataset,row,imageResampledField,gzFileNames)
+    updateRow(project,dataset,row,imageResampledField,gzOutFileNames)
    
 
     if i==-1:

+ 0 - 83
pythonScripts/linkOrthanc.py

@@ -1,83 +0,0 @@
-import os
-import json
-import re
-import sys
-import datetime
-import re
-
-fhome=os.path.expanduser('~')
-sys.path.insert(1,fhome+'/software/src/labkeyInterface')
-import labkeyInterface
-import labkeyDatabaseBrowser
-
-fconfig=os.path.join(fhome,'.labkey','network.json')
-
-net=labkeyInterface.labkeyInterface()
-net.init(fconfig)
-db=labkeyDatabaseBrowser.labkeyDB(net)
-
-
-i=0
-projectOrthanc='Orthanc/Database'
-projectIPNU='iPNUMMretro/Study'
-
-ds=db.selectRows(projectIPNU,'study','Imaging',[])
-
-varList={'CT':['startswith','CT%20WB'],'PETWB':['eq','PET%20WB'],
-        'PETWBUncorrected':['eq','PET%20WB%20Uncorrected'],
-        'Topogram':['startswith','Topogram']}
-
-i=0
-
-for row in ds['rows']:
-
-    for var in varList:
-        print('Filtering for {}/{}'.format(var,varList[var][1]))
-        qfilter={}
-        qfilter['variable']='seriesDescription'
-        qfilter['value']=varList[var][1]
-        qfilter['oper']=varList[var][0]
-        
-        qfilter1={}
-        qfilter1['variable']='PatientId'
-        qfilter1['value']=row['PatientId']
-        qfilter1['oper']='eq'
-
-        #don't have dates, so I have to poll
-        qfilter2={}
-        qfilter2['variable']='studyDate'
-        qfilter2['oper']='dateeq'
-        fdate=row['date']
-        fdate=re.sub(r' (.*)$','',fdate)
-        fdate=re.sub(r'/',r'-',fdate)
-        qfilter2['value']=fdate
-
-
-        tfilter=[qfilter,qfilter1,qfilter2]
-        ds1=db.selectRows(projectOrthanc,'study','Imaging',tfilter)
-        print('[{}][{}][{}]: {}'.format(\
-                row['PatientId'],var,fdate,len(ds1['rows'])))
-        
-        
-        for r1 in ds1['rows']:
-            print("ID: {}, DESC: {}, DATE: {}".format(\
-                r1['PatientId'],r1['seriesDescription'],r1['studyDate']))
-            #print("Study date {}/{}".format(row['date'],r1['studyDate']))
-
-
-            
-        row[var]=len(ds1['rows'])
-        if len(ds1['rows'])==1:
-            row[var]=ds1['rows'][0]['orthancSeries']
-
-        if len(ds1['rows'])>1:
-            if var=='CT':
-                varC=[r1['orthancSeries']  for r1 in ds1['rows']\
-                        if r1['seriesDescription'].find('fov')<0] 
-                if len(varC)==1:
-                    row[var]=varC[0]
-
-       
-    db.modifyRows('update',projectIPNU,'study','Imaging',[row])
-        
-print("Done")

+ 87 - 18
pythonScripts/populateImagingFromOrthanc.py

@@ -1,3 +1,5 @@
+#date sorts studies from orthanc dataset into target study dataset
+
 import os
 import json
 import re
@@ -6,63 +8,130 @@ import datetime
 import re
 
 fhome=os.path.expanduser('~')
-sys.path.insert(1,fhome+'/software/src/labkeyInterface')
+fsetup=os.path.join(fhome,'.labkey','setup.json')
+with open(fsetup,'r') as f:
+    setup=json.load(f)
+
+sys.path.insert(0,setup['paths']['labkeyInterface'])
 import labkeyInterface
 import labkeyDatabaseBrowser
+import labkeyFileBrowser
 
+sys.path.insert(0,setup['paths']['analysisInterface'])
+import analysisInterface
+
 fconfig=os.path.join(fhome,'.labkey','network.json')
 
 net=labkeyInterface.labkeyInterface()
 net.init(fconfig)
 db=labkeyDatabaseBrowser.labkeyDB(net)
+fb=labkeyFileBrowser.labkeyFileBrowser(net)
+
+parameterFile=sys.argv[1]
+runid=sys.argv[2]
+
+ana=analysisInterface.analysisInterface(db,fb,runid)
+ana.updateStatus(2)
+pars=ana.getParameters(parameterFile)
+
+if pars is None:
+    sys.exit()
+
 
 
 i=0
-projectOrthanc='Orthanc/Database'
-inputDataset='Imaging'
-projectStudy='iPNUMMretro/Study'
-outputDataset='Imaging1'
+#from orthancDatabase/Imaging dataset
+projectOrthanc=pars['Orthanc']['project']
+inputQuery=pars['Orthanc']['queryName']
+inputSchema=pars['Orthanc']['schemaName']
+inputParticipantField=pars['Orthanc']['participantField']
+
+#to target project dataset
+projectStudy=pars['Database']['project']
+#'iPNUMMretro/Study'
+#for prospective, set
+#projectStudy='IPNUMMprospektiva/Study'
+outputQuery=pars['Database']['queryName']
+outputSchema=pars['Database']['schemaName']
+#select patientId that are contained in the demographics dataset
+listQuery=pars['Database']['listQuery']
+dbParticipantField=pars['Database']['participantField']
+
+
+#make a list of patients
+dsDemo=db.selectRows(projectStudy,outputSchema,listQuery,[])
+patients=[row[dbParticipantField] for row in dsDemo['rows']]
+patients=list(set(patients))
 
-ds=db.selectRows(projectOrthanc,'study',inputDataset,[])
+patientListStr=""
+for p in patients:
+    if len(patientListStr)>0:
+        patientListStr+=";"
+    patientListStr+=p
+
+
+patientFilter={'variable':inputParticipantField,
+        'value':patientListStr,'oper':'in'}
+
+#takes orthanc as the baseline, selects from patient list
+ds=db.selectRows(projectOrthanc,inputSchema,inputQuery,[patientFilter])
 
 
 #single entry for the patientId/dicomStudy pair
-selectVars=['PatientId','dicomStudy']
+selectVars={dbParticipantField:inputParticipantField,\
+        'dicomStudy':'dicomStudy'}
 
-dates=[datetime.datetime.strptime(row['studyDate'],'%Y/%m/%d %H:%M:%S') for row in ds['rows']]
+dates=[datetime.datetime.strptime(row['studyDate'],'%Y/%m/%d %H:%M:%S') \
+        for row in ds['rows']]
+
+#date sorted entries
 idx=sorted(range(len(dates)),key=lambda k:dates[k])
 
+
+#historical traverse of all studies from inputDataset
 for j in range(len(dates)):
-    #row in ds['rows']:
+    
     row=ds['rows'][idx[j]]
 
     #skip series which don't match selected filters
     outvar='NONE'
     sd=row['seriesDescription']
     if sd=='PET WB':
-        outvar='PETWB'
+        outvar='PETWB_orthancId'
     if sd.find('CT WB')==0:
         if sd.find('fov')<0:
-            outvar='CT'
+            outvar='CT_orthancId'
 
+    #skip irrelevant series
     if outvar=='NONE':
         continue
 
     filters=[]
     for v in selectVars:
-        filters.append({'variable':v,'value':row[v],'oper':'eq'})
-    ds2=db.selectRows(projectStudy,'study',outputDataset,
-            [{'variable':'PatientId','value':row['PatientId'],'oper':'eq'}])
-    ds1=db.selectRows(projectStudy,'study',outputDataset,filters)
+        filters.append({'variable':v,\
+                'value':row[selectVars[v]],'oper':'eq'})
+
+    #ds2 are all studies by patient from sorted dataset
+    ds2=db.selectRows(projectStudy,outputSchema,outputQuery,
+            [{'variable':dbParticipantField,\
+                    'value':row[inputParticipantField],'oper':'eq'}])
+    
+    #ds1 is the matching row from output dataset 
+    ds1=db.selectRows(projectStudy,outputSchema,outputQuery,filters)
     if len(ds1['rows'])>1:
-        print('ERROR: too many matches for {}/{}'.format(row['PatientId'],row['dicomStudy']))
+        print('ERROR: too many matches for {}/{}'.\
+                format(row[inputParticipantField],row['dicomStudy']))
         continue
+
     mode='update'
     outRow={}
     if len(ds1['rows'])==0:
         mode='insert'
-        outRow['PatientId']=row['PatientId']
+        outRow[dbParticipantField]=row[inputParticipantField]
+        
+        #setting sequence number to length of already included studies
+        #sorted by date makes it historically incremental
         outRow['SequenceNum']=len(ds2['rows'])
+
         outRow['dicomStudy']=row['dicomStudy']
     else:
         outRow=ds1['rows'][0]
@@ -70,7 +139,7 @@ for j in range(len(dates)):
     outRow[outvar]=row['orthancSeries']
     outRow['studyDate']=row['studyDate']
 
-    status=db.modifyRows(mode,projectStudy,'study',outputDataset,[outRow])
+    status=db.modifyRows(mode,projectStudy,outputSchema,outputQuery,[outRow])
     print('{}'.format(status))
     if j==50:
         break
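
The parameter file consumed above presumably follows `templates/segmentation.json.sample`
(added in this commit). A minimal sketch reconstructed only from the keys the script
reads, with values borrowed from the previously hard-coded defaults (illustrative,
not confirmed):

```json
{
  "Orthanc": {
    "project": "Orthanc/Database",
    "schemaName": "study",
    "queryName": "Imaging",
    "participantField": "PatientId"
  },
  "Database": {
    "project": "iPNUMMretro/Study",
    "schemaName": "study",
    "queryName": "Imaging1",
    "listQuery": "Demographics",
    "participantField": "PatientId"
  }
}
```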

+ 143 - 129
pythonScripts/preprocess.py

@@ -6,67 +6,16 @@ import nibabel
 import shutil
 import sys
 
-shome=os.path.expanduser('~nixUser')
-fhome=os.path.expanduser('~')
-with open(os.path.join(fhome,".labkey","setup.json")) as f:
-    setup=json.load(f)
+#import-safe: nothing runs on import; the work happens in main()
 
-sys.path.insert(0,setup["paths"]["labkeyInterface"])
-import labkeyInterface
-import labkeyDatabaseBrowser
-import labkeyFileBrowser
-
-sys.path.insert(0,setup["paths"]["orthancInterface"])
-import orthancInterface
-import orthancFileBrowser
-
-#sys.path.insert(1,shome+'/software/src/IPNUMM/dicomUtils')
-#import loadDicom
-
-
-fconfig=os.path.join(fhome,'.labkey','network.json')
-
-#matlab=os.path.join("/","data","software","install","matlab","bin","matlab")
-matlab=setup["paths"]["matlab"]
-#os.path.join(fhome,"software","install","matlab","bin","matlab")
-generalCodes=setup["paths"]["generalCodes"]
-#ios.path.join(fhome,"software","src","generalCodes")
-niftiTools=setup["paths"]["niftiTools"]
-#niftiTools=os.path.join(fhome,"software","src","NifTiScripts")
-
-net=labkeyInterface.labkeyInterface()
-net.init(fconfig)
-db=labkeyDatabaseBrowser.labkeyDB(net)
-fb=labkeyFileBrowser.labkeyFileBrowser(net)
-
-
-onet=orthancInterface.orthancInterface()
-onet.init(fconfig)
-ofb=orthancFileBrowser.orthancFileBrowser(onet)
-
-hi=0
-project='iPNUMMretro/Study'
-dataset='Imaging'
-tempBase=os.path.join(fhome,'temp')
-
-#all images from database
-ds=db.selectRows(project,'study',dataset,[])
-#imageSelector={"CT":"CT","PET":"PETWB"};
-imageSelector={"CT":"CT_orthancId","PET":"PETWB_orthancId"}
-imageResampledField={"CT":"ctResampled","PET":"petResampled"}
-
-#projectNIfTIBase=os.path.join(labkeyBase,'files',project,'@files/nifti')
-#use webdav to transfer file (even though it is localhost)
-
-
-def getPatientLabel(row):
-    return row['PatientId'].replace('/','_') 
+def getPatientLabel(row,participantField='PatientId'):
+    return row[participantField].replace('/','_') 
 
 def getVisitLabel(row):
     return 'VISIT_'+str(int(row['SequenceNum']))
 
-def getStudyLabel(row):
-    return getPatientLabel(row)+'-'+getVisitLabel(row)
+def getStudyLabel(row,participantField='PatientId'):
+    return getPatientLabel(row,participantField)+'-'+getVisitLabel(row)
 
 def runPreprocess_DM(matlab,generalCodes,niftiTools,studyDir):
 
@@ -86,7 +35,8 @@ def runPreprocess_DM(matlab,generalCodes,niftiTools,studyDir):
     return True
 
 
-def getDicom(ofb,row,zipDir,rawDir,im,imageSelector):
+def getDicom(ofb,row,zipDir,rawDir,im,imageSelector,\
+        participantField='PatientId'):
 
     #Load the dicom zip file and unzips it. If zip file is already at the expected path, it skips the loading step
 
@@ -97,7 +47,8 @@ def getDicom(ofb,row,zipDir,rawDir,im,imageSelector):
         return False
 
     print("{}: {}".format(im,seriesId))
-    fname=os.path.join(zipDir,getStudyLabel(row)+'_'+im+".zip");
+    fname=os.path.join(zipDir,\
+            getStudyLabel(row,participantField)+'_'+im+".zip");
 
     #copy data from orthanc
     if os.path.isfile(fname):
@@ -127,113 +78,176 @@ def getDicom(ofb,row,zipDir,rawDir,im,imageSelector):
 
     return True    
 
-def updateRow(project,dataset,row,imageResampledField,gzFileNames):
-    row['patientCode']=getPatientLabel(row)
+#db is passed in explicitly since the connection objects live in main()
+def updateRow(db,project,dataset,row,imageResampledField,gzFileNames,\
+        participantField='PatientId'):
+    row['patientCode']=getPatientLabel(row,participantField)
     row['visitCode']=getVisitLabel(row)
     for im in imageResampledField:
         row[imageResampledField[im]]=gzFileNames[im]
     db.modifyRows('update',project,'study',dataset,[row])
  
 
+def main(parameterFile):
+    shome=os.path.expanduser('~nixUser')
+    fhome=os.path.expanduser('~')
+    with open(os.path.join(fhome,".labkey","setup.json")) as f:
+        setup=json.load(f)
+
+    sys.path.insert(0,setup["paths"]["labkeyInterface"])
+    import labkeyInterface
+    import labkeyDatabaseBrowser
+    import labkeyFileBrowser
+
+    sys.path.insert(0,setup["paths"]["orthancInterface"])
+    import orthancInterface
+    import orthancFileBrowser
+
+    fconfig=os.path.join(fhome,'.labkey','network.json')
+
+    matlab=setup["paths"]["matlab"]
+    generalCodes=setup["paths"]["generalCodes"]
+    niftiTools=setup["paths"]["niftiTools"]
+
+    net=labkeyInterface.labkeyInterface()
+    net.init(fconfig)
+    db=labkeyDatabaseBrowser.labkeyDB(net)
+    fb=labkeyFileBrowser.labkeyFileBrowser(net)
+
+    onet=orthancInterface.orthancInterface()
+    onet.init(fconfig)
+    ofb=orthancFileBrowser.orthancFileBrowser(onet)
+
 
-i=0
-for row in ds["rows"]:
+    with open(parameterFile) as f:
+        pars=json.load(f)
 
-    #interesting files are processedDir/studyName_CT_notCropped_2mmVoxel.nii
-    #asn processedDir/studyName_PET_notCropped_2mmVoxel.nii
-    volumeFileNames={im:\
-            getStudyLabel(row)+'_'+im+
+    hi=0
+    project=pars['Database']['project']
+    dataset=pars['Database']['queryName']
+    schema=pars['Database']['schemaName']
+
+    tempBase=os.path.join(fhome,'temp')
+
+
+    participantField=pars['Database']['participantField']
+
+    #all images from database
+    ds=db.selectRows(project,schema,dataset,[])
+    imageSelector={"CT":"CT_orthancId","PET":"PETWB_orthancId"}
+    #output
+    imageResampledField={"CT":"ctResampled","PET":"petResampled","patientmask":"ROImask"}
+
+    #use webdav to transfer file (even though it is localhost)
+
+    i=0
+    for row in ds["rows"]:
+
+        #interesting files are processedDir/studyName_CT_notCropped_2mmVoxel.nii
+        #and processedDir/studyName_PET_notCropped_2mmVoxel.nii
+        volumeFileNames={im:\
+            getStudyLabel(row,participantField)+'_'+im+
             '_notCropped_2mmVoxel.nii'\
-                for im in imageSelector}
-    gzFileNames={im:f+".gz" \
+                for im in imageResampledField}
+        gzFileNames={im:f+".gz" \
             for (im,f) in volumeFileNames.items()}
     
-    #build/check remote directory structure
-    remoteDir=fb.buildPathURL(project,['preprocessedImages',getPatientLabel(row),getVisitLabel(row)])
+        #build/check remote directory structure
+        remoteDir=fb.buildPathURL(project,['preprocessedImages',\
+            getPatientLabel(row,participantField),getVisitLabel(row)])
 
-    gzRemoteFiles={im:remoteDir+'/'+f\
+        gzRemoteFiles={im:remoteDir+'/'+f\
             for (im,f) in gzFileNames.items()}
 
-    remoteFilePresent=[fb.entryExists(f)\
+        remoteFilePresent=[fb.entryExists(f)\
             for f in gzRemoteFiles.values()]
 
-    for f in gzRemoteFiles.values():
-        print("[{}]: [{}]".format(f,fb.entryExists(f)))
+        for f in gzRemoteFiles.values():
+            print("[{}]: [{}]".format(f,fb.entryExists(f)))
 
 
-    if all(remoteFilePresent):
-        print("Entry for row done.")
-        updateRow(project,dataset,row,imageResampledField,\
-                gzFileNames)
-        continue
+        if all(remoteFilePresent):
+            print("Entry for row done.")
+            updateRow(db,project,dataset,row,imageResampledField,\
+                gzFileNames,participantField)
+            continue
 
     
-    #setup the directory structure for preprocess_DM
-    studyDir=os.path.join(tempBase,getStudyLabel(row))
-    if not os.path.isdir(studyDir):
-        os.mkdir(studyDir)
+        #setup the directory structure for preprocess_DM
+        studyDir=os.path.join(tempBase,getStudyLabel(row,participantField))
+        if not os.path.isdir(studyDir):
+            os.mkdir(studyDir)
 
-    rawDir=os.path.join(studyDir,'Raw')
-    if not os.path.isdir(rawDir):
-        os.mkdir(rawDir)
+        rawDir=os.path.join(studyDir,'Raw')
+        if not os.path.isdir(rawDir):
+            os.mkdir(rawDir)
 
-    zipDir=os.path.join(studyDir,'Zip')
-    if not os.path.isdir(zipDir):
-        os.mkdir(zipDir)
+        zipDir=os.path.join(studyDir,'Zip')
+        if not os.path.isdir(zipDir):
+            os.mkdir(zipDir)
 
-    processedDir=os.path.join(studyDir,'Processed')
-    if not os.path.isdir(processedDir):
-        os.mkdir(processedDir)
+        processedDir=os.path.join(studyDir,'Processed')
+        if not os.path.isdir(processedDir):
+            os.mkdir(processedDir)
 
-    #specify local file names with path 
-    volumeFiles={im:os.path.join(processedDir,f)\
+        #specify local file names with path 
+        volumeFiles={im:os.path.join(processedDir,f)\
             for (im,f) in volumeFileNames.items()}
-    gzFiles={im:f+".gz"\
+        gzFiles={im:f+".gz"\
             for (im,f) in volumeFiles.items()}
 
-    filesPresent=[os.path.isfile(f) for f in gzFiles.values()]
+        filesPresent=[os.path.isfile(f) for f in gzFiles.values()]
     
     
-    if not all(filesPresent):
+        if not all(filesPresent):
 
-        for im in imageSelector:
-            #checks if raw files are already loaded
-            getDicom(ofb,row,zipDir,rawDir,im,imageSelector)
+            #use imageSelector -> inputs
+            for im in imageSelector:
+                #checks if raw files are already loaded
+                getDicom(ofb,row,zipDir,rawDir,im,imageSelector,\
+                    participantField)
 
 
     
-        #preprocess and zip
-        ok=runPreprocess_DM(matlab,generalCodes,niftiTools,studyDir)
-        if not ok:
-            shutil.rmtree(studyDir)
-            continue
-
-
-        for f in volumeFiles.values():
-            print("Running gzip {}".format(f))
-            outText=subprocess.check_output(["/bin/gzip",f])
-            print(outText.decode('utf-8'))
-
-    #upload local files to remote
-    for im in gzFiles:
-    #for local,remote in zip(gzFiles,gzRemoteFiles):
-        local=gzFiles[im]
-        remote=gzRemoteFiles[im]
-        print("Uploading {}".format(local))
-        fb.writeFileToFile(local,remote)
-
-
-    #update row and let it know where the processed files are
-    updateRow(project,dataset,row,imageResampledField,gzFileNames)
+            #preprocess and zip
+            ok=runPreprocess_DM(matlab,generalCodes,niftiTools,studyDir)
+            if not ok:
+                shutil.rmtree(studyDir)
+                continue
+
+
+            for f in volumeFiles.values():
+                print("Running gzip {}".format(f))
+                outText=subprocess.check_output(["/bin/gzip",f])
+                print(outText.decode('utf-8'))
+
+        #upload local files to remote
+        for im in gzFiles:
+        #for local,remote in zip(gzFiles,gzRemoteFiles):
+            local=gzFiles[im]
+            remote=gzRemoteFiles[im]
+            print("Uploading {}".format(local))
+            fb.writeFileToFile(local,remote)
+
+
+        #update row and let it know where the processed files are
+        updateRow(db,project,dataset,row,imageResampledField,gzFileNames,\
+            participantField)
    
 
-    #cleanup
-    shutil.rmtree(studyDir)
+        #cleanup
+        shutil.rmtree(studyDir)
     
 
-    if i==-1:
-        break
-    i=i+1
+        if i==-1:
+            break
+        i=i+1
+
+    print("Done")
+
 
+if __name__ == '__main__':
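+    #usage (hypothetical file name): python preprocess.py parameters.json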
+    main(sys.argv[1])
 
-print("Done")

+ 124 - 0
pythonScripts/runSegmentation.py

@@ -0,0 +1,124 @@
+import os
+import json
+import re
+import subprocess
+import nibabel
+import shutil
+import sys
+
+#import-safe: nothing runs on import; the work happens in main()
+
+def getPatientLabel(row,participantField='PatientId'):
+    return row[participantField].replace('/','_') 
+
+def getVisitLabel(row):
+    return 'VISIT_'+str(int(row['SequenceNum']))
+
+def getStudyLabel(row,participantField='PatientId'):
+    return getPatientLabel(row,participantField)+'-'+getVisitLabel(row)
+
+
+#db is passed in explicitly since the connection objects live in main()
+def updateRow(db,project,dataset,row,imageResampledField,gzFileNames,\
+        participantField='PatientId'):
+    row['patientCode']=getPatientLabel(row,participantField)
+    row['visitCode']=getVisitLabel(row)
+    for im in imageResampledField:
+        row[imageResampledField[im]]=gzFileNames[im]
+    db.modifyRows('update',project,'study',dataset,[row])
+ 
+def replacePatterns(infile,outfile,patterns):
+    #read the template, apply all pattern->value substitutions, write the result
+    with open(infile,'r') as f:
+        data=f.read()
+    for p in patterns:
+        data=re.sub(p,patterns[p],data)
+    with open(outfile,'w') as of:
+        of.write(data)
+    
+def valueSubstitution(pars,val):
+    if val.find('__home__')>-1:
+        val=re.sub(r'__home__',os.path.expanduser('~'),val)
+
+    return val
+
+def main(parameterFile):
+    
+    fhome=os.path.expanduser('~')
+    with open(os.path.join(fhome,".labkey","setup.json")) as f:
+        setup=json.load(f)
+
+    sys.path.insert(0,setup["paths"]["labkeyInterface"])
+    import labkeyInterface
+    import labkeyDatabaseBrowser
+    import labkeyFileBrowser
+
+    sys.path.append(setup['paths']['parseConfig'])
+    import parseConfig
+
+
+    fconfig=os.path.join(fhome,'.labkey','network.json')
+
+    net=labkeyInterface.labkeyInterface()
+    net.init(fconfig)
+    db=labkeyDatabaseBrowser.labkeyDB(net)
+    fb=labkeyFileBrowser.labkeyFileBrowser(net)
+
+    with open(parameterFile) as f:
+        pars=json.load(f)
+
+    pars=parseConfig.convert(pars)
+    pars=parseConfig.convertValues(pars)
+
+    hi=0
+    project=pars['project']
+    dataset=pars['targetQuery']
+    schema=pars['targetSchema']
+
+
+    tempBase=pars['tempBase']
+    if not os.path.isdir(tempBase):
+        os.makedirs(tempBase)
+
+
+    participantField=pars['participantField']
+
+    #all images from database
+    ds=db.selectRows(project,schema,dataset,[])
+
+    
+    #imageSelector={"CT":"CT","PET":"PETWB_orthancId"}
+    #input
+    images=pars['images']
+    #use webdav to transfer file (even though it is localhost)
+
+    tempNames={im:os.path.join(tempBase,images[im]['tempFile']) for im in images}
+ 
+
+    #update the config
+    cfg=pars['deepmedic']['config']
+    for c in cfg:
+        replacePatterns(cfg[c]['template'],\
+                cfg[c]['out'],\
+                pars['replacePattern'])
+    i=0
+    for row in ds["rows"]:
+        
+        #download to temp file (could be a fixed name)
+        baseDir=fb.formatPathURL(project,pars['imageDir']+'/'+\
+            getPatientLabel(row,participantField)+'/'+\
+            getVisitLabel(row))
+        for im in images:
+            fb.readFileToFile(baseDir+'/'+row[images[im]['queryField']],
+                os.path.join(tempBase,images[im]['tempFile']))
+            
+        break  #debug stub: stop after the first row
+        i=i+1
+
+    print("Done")
+
+
+if __name__ == '__main__':
+    main(sys.argv[1])
+    #sys.exit()
+
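
The parameter file for `runSegmentation.py` presumably also follows
`templates/segmentation.json.sample`. A sketch reconstructed only from the keys the
script reads after `parseConfig` conversion; every value is illustrative:

```json
{
  "project": "iPNUMMretro/Study",
  "targetSchema": "study",
  "targetQuery": "Imaging",
  "participantField": "PatientId",
  "tempBase": "__home__/temp/segmentation",
  "imageDir": "preprocessedImages",
  "images": {
    "CT": {"queryField": "ctResampled", "tempFile": "CT.nii"}
  },
  "replacePattern": {"__workDir__": "__home__/temp/segmentation"},
  "deepmedic": {
    "config": {
      "test": {
        "template": "segmentation/test/testConfig.cfg.template",
        "out": "segmentation/test/testConfig.cfg"
      }
    }
  }
}
```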

+ 0 - 96
pythonScripts/scanOrthanc.py

@@ -1,96 +0,0 @@
-import os
-import json
-import re
-import sys
-
-
-fhome=os.path.expanduser('~')
-sys.path.insert(1,fhome+'/software/src/labkeyInterface')
-import labkeyInterface
-import labkeyDatabaseBrowser
-
-sys.path.insert(1,fhome+'/software/src/orthancInterface')
-import orthancInterface
-import orthancDatabaseBrowser
-
-fconfig=os.path.join(fhome,'.labkey','network.json')
-
-net=labkeyInterface.labkeyInterface()
-net.init(fconfig)
-db=labkeyDatabaseBrowser.labkeyDB(net)
-
-
-onet=orthancInterface.orthancInterface()
-onet.init(fconfig)
-odb=orthancDatabaseBrowser.orthancDB(onet)
-
-i=0
-project='Orthanc/Database'
-
-patients=odb.getPatients()
-
-for p in patients:
-    pdata=odb.getPatientData(p)
-    dicom=pdata['MainDicomTags']
-    patientId=dicom['PatientID']
-    
-    print("Patient: {} ID: {}".format(p,patientId))
-
-    qfilter={'variable':'PatientId','value':patientId,'oper':'eq'}
-    ds=db.selectRows(project,'study','Demographics',[qfilter])
-    if len(ds['rows'])==0:
-        row={}
-        row['PatientId']=patientId
-        row['birthDate']=dicom['PatientBirthDate']
-        row['PatientName']=dicom['PatientName']
-        row['OrthancId']=p
-        db.modifyRows('insert',project,'study','Demographics',[row])
-
-    for s in pdata['Studies']:
-        sdata=odb.getStudyData(s)
-        sdicom=sdata['MainDicomTags']
-        sid=sdicom['StudyInstanceUID']
-        print('Study: {}/{}'.format(s,sid))
-        #print('Data: {}'.format(sdata))
-        sdate=sdicom['StudyDate']
-        #continue
-        
-        
-        for se in sdata['Series']:
-
-            qfilter={'variable':'orthancSeries','value':se,'oper':'eq'}
-            ds=db.selectRows(project,'study','Imaging',[qfilter])
-            if len(ds['rows'])>0:
-                continue
-
-            #count existing entries for patient
-            qfilter={'variable':'PatientId','value':patientId,'oper':'eq'}
-            ds=db.selectRows(project,'study','Imaging',[qfilter])
-            seqNum=len(ds['rows'])
-
-            sedata=odb.getSeriesData(se)
-            sedicom=sedata['MainDicomTags']
-            seid=sedicom['SeriesInstanceUID']
-            print('Series: {}/{}'.format(se,seid))
-            #print('Data: {}'.format(sedata))
-            seDesc="NONE"
-            try:
-                seDesc=sedicom['SeriesDescription']
-            except KeyError:
-                pass
-
-            print('ID: {}.'.format(seDesc))
-
-            row={}
-            row['PatientId']=patientId
-            row['sequenceNum']=seqNum
-            row['dicomStudy']=sid
-            row['orthancStudy']=s
-            row['dicomSeries']=seid
-            row['orthancSeries']=se
-            row['studyDate']=sdate
-            row['seriesDescription']=seDesc
-            db.modifyRows('insert',project,'study','Imaging',[row])
-
-
-print("Done")

+ 133 - 0
segmentation/model/modelConfig.cfg

@@ -0,0 +1,133 @@
+# -*- coding: utf-8 -*-
+#  Default values are set internally, if the corresponding parameter is not found in the configuration file.
+
+#  [Optional but highly suggested] The name will be used in the filenames when saving the model.
+#  Default: "cnnModel"
+modelName = "model_name"
+
+#  [Required] The main folder that the output will be placed in.
+folderForOutput = "../output/"
+
+
+#  ================ MODEL PARAMETERS =================
+
+#  [Required] The number of classes in the task. Including background!
+numberOfOutputClasses = 19 # see C:\Users\DTHUFF\Documents\Research\Projects\semi_sup_segmentation\visercal3\Documentation\radLexIDs.xlsx
+# 0=background
+# 1=liver
+# 2=spleen
+# 3=lung
+# 4=Thyroid
+# 5=kidney
+# 6=Pancreas
+# 7=Gallbladder ##
+# 8=Bladder
+# 9=Aorta
+# 10=Trachea
+# 11=Sternum ##
+# 12=vertebra L1 ##
+# 13=adrenal ##
+# 14=psoas major
+# 15=Rectus
+# 16=Bowel
+# 17=Stomach 
+# 18=Heart ##
+
+#  [Required] The number of input channels, eg number of MRI modalities.
+numberOfInputChannels = 1
+
+#  +++++++++++Normal pathway+++++++++++
+#  [Required] This list should have as many entries as the number of layers I want the normal-pathway to have.
+#  Each entry is an integer that specifies the number of Feature Maps to use in each of the layers.
+numberFMsPerLayerNormal = [30, 30, 40, 40, 40, 40, 50, 50]
+#  [Required] This list should have as many entries as the number of layers in the normal pathway.
+#  Each entry should be a sublist with 3 entries. These should specify the dimensions of the kernel at the corresponding layer.
+kernelDimPerLayerNormal = [[3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3]]
+
+#  [Optional] List with number of layers, at the output of which to make a residual connection with the input of the previous layer. Ala Kaiming He et al, "Deep Residual Learning for Image Recognition".
+#  Note: Numbering starts from 1 for the first layer, which is not an acceptable value (no previous layer).
+#  Example: [4,6,8] will connect (add) to the output of Layer 4 the input of Layer 3. Also, input to 5th will be added to output of 6th, and input of 7th to output of 8th.
+#  Default: [], no residual connections
+layersWithResidualConnNormal = [4,6,8]
+
+#  [Optional] Layers to make of lower rank. Ala Yani Ioannou et al, "Training CNNs with Low-Rank Filters For Efficient Image Classification".
+#  Example: [3,5] will make the 3rd and 5th layers of lower rank.
+#  Default: []
+lowerRankLayersNormal = []
+
+#  +++++++++++Subsampled pathway+++++++++++
+#  [Optional] Specify whether to use a subsampled pathway. If False, all subsampled-related parameters will be read but disregarded in the model-construction.
+#  Default: False
+useSubsampledPathway = True
+
+#  [Optionals] The below parameters specify the subsampled-pathway architecture in a similar way as the normal.
+#  If they are omitted and useSubsampledPathway is set to True, the subsampled pathway will be made similar to the normal pathway (suggested for easy use).
+#  [WARN] Subsampled pathway MUST have the same size of receptive field as the normal. Limitation in the code. User could easily specify different number of FMs. But care must be given if number of layers is changed. In this case, kernel sizes should also be adjusted to achieve same size of Rec.Field.
+numberFMsPerLayerSubsampled = [30, 30, 40, 40, 40, 40, 50, 50]
+kernelDimPerLayerSubsampled = [[3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3]]
+
+#  [Optional] How much to downsample the image that the subsampled-pathway processes.
+#  Requires either a) list of 3 integers, or b) list of lists of 3 integers.
+#  Input example a) [3,3,3]   Creates one additional parallel pathway, where input is subsampled by 3 in the x,y,z axis (the 3 elements of the list).
+#  Input example b) [[3,3,3], [5,5,5]]   Creates two additional parallel pathways. One with input subsampled by [3,3,3], and one subsampled by [5,5,5]. If not specified, each path mirrors the previous.
+#  Default: [[3,3,3]]
+subsampleFactor = [[3,3,3], [5,5,5]]
+
+#  [Optional] Residual Connections for subsampled pathway. See corresponding parameter for normal pathway.
+#  Default: mirrors the normal pathway, no residual connections
+layersWithResidualConnSubsampled = [4,6,8]
+
+#  [Optional] Layers to make of lower rank. See corresponding parameter for normal pathway.
+#  Default: Mirrors the normal pathway
+#lowerRankLayersSubsampled = []
+
+#  +++++++++++FC Layers+++++++++++
+#  [Optional] After the last layers of the normal and subsampled pathways are concatenated, additional Fully Connected hidden layers can be added before the final classification layer.
+#  Specify a list, with as many entries as the number of ADDITIONAL FC layers (other than the classification layer) to add. The entries specify the number of Feature Maps to use.
+#  Default: []
+numberFMsPerLayerFC = [250, 250]
+
+#  [Optional] Specify dimensions of the kernel in the first FC layer. This kernel combines the features from multiple scales. Applies to the final Classification layer if no hidden FC layers in network.
+#  Note: convolution with this kernel retains the size of the FMs (input is padded).
+#  Default: [1,1,1]
+kernelDimFor1stFcLayer = [3,3,3]
+
+#  [Optional] Residual Connections for the FC hidden layers. See corresponding parameter for normal pathway.
+#  Default: [], no connections.
+layersWithResidualConnFC = [2]
+
+#  +++++++++++Size of Image Segments+++++++++++
+#  DeepMedic does not process patches of the image, but larger image-segments. Specify their size here.
+
+#  [Required] Size of training segments influence the captured distribution of samples from the different classes (see DeepMedic paper)
+segmentsDimTrain = [37,37,37]
+#  [Optional] The size of segments to use during the validation-on-samples process that is performed throughout training if requested.
+#  Default: equal to receptive field, to validate on patches.
+segmentsDimVal = [17,17,17]
+#  [Optional] Bigger image segments for Inference are safe to use and only speed up the process. Only limitation is the GPU memory.
+#  Default: equal to the training segment.
+segmentsDimInference = [45,45,45]
+
+
+#  [Optionals] Dropout Rates on the input connections of the various layers. Each list should have as many entries as the number of layers in the corresponding pathway.
+#  0 = no dropout. 1= 100% drop of the neurons. Empty list for no dropout.
+#  Default: []
+dropoutRatesNormal = []
+dropoutRatesSubsampled = []
+#  Default: 50% dropout on every Fully Connected layer except for the first one after the concatenation
+#  Note: The list for FC rates should have one additional entry in comparison to "numberFMsPerLayerFC", for the classification layer.
+dropoutRatesFc = [0.0, 0.5, 0.5] # +1 for the classification layer!
+
+#  [Optional] Initialization method for the conv kernel weights.
+#  Options: ["normal", std] for sampling from N(0, std). ["fanIn", scale] for scaling variance with (scale/fanIn). Eg ["fanIn", 2] initializes ala "Delving Deep into Rectifiers".
+#  Default: ["fanIn", 2]
+convWeightsInit = ["fanIn", 2]
+#  [Optional] Activation Function for all convolutional layers. Allowed: "linear", "relu", "prelu", "elu", "selu"
+#  Default: "prelu"
+activationFunction = "prelu"
+
+#  [Optional] Batch Normalization uses a rolling average of the mus and std for inference.
+#  Specify over how many batches (SGD iterations) this moving average should be computed. Value <= 0 disables BN.
+#  Default : 60
+rollAverageForBNOverThatManyBatches = 60
+

+ 133 - 0
segmentation/model/modelConfig.cfg.template

@@ -0,0 +1,133 @@
+# -*- coding: utf-8 -*-
+#  Default values are set internally, if the corresponding parameter is not found in the configuration file.
+
+#  [Optional but highly suggested] The name will be used in the filenames when saving the model.
+#  Default: "cnnModel"
+modelName = "model_name"
+
+#  [Required] The main folder that the output will be placed in.
+folderForOutput = "__workDir__/output"
+
+
+#  ================ MODEL PARAMETERS =================
+
+#  [Required] The number of classes in the task. Including background!
+numberOfOutputClasses = 19 # see C:\Users\DTHUFF\Documents\Research\Projects\semi_sup_segmentation\visercal3\Documentation\radLexIDs.xlsx
+# 0=background
+# 1=liver
+# 2=spleen
+# 3=lung
+# 4=Thyroid
+# 5=kidney
+# 6=Pancreas
+# 7=Gallbladder ##
+# 8=Bladder
+# 9=Aorta
+# 10=Trachea
+# 11=Sternum ##
+# 12=vertebra L1 ##
+# 13=adrenal ##
+# 14=psoas major
+# 15=Rectus
+# 16=Bowel
+# 17=Stomach 
+# 18=Heart ##
+
+#  [Required] The number of input channels, eg number of MRI modalities.
+numberOfInputChannels = 1
+
+#  +++++++++++Normal pathway+++++++++++
+#  [Required] This list should have as many entries as the number of layers I want the normal-pathway to have.
+#  Each entry is an integer that specifies the number of Feature Maps to use in each of the layers.
+numberFMsPerLayerNormal = [30, 30, 40, 40, 40, 40, 50, 50]
+#  [Required] This list should have as many entries as the number of layers in the normal pathway.
+#  Each entry should be a sublist with 3 entries. These should specify the dimensions of the kernel at the corresponding layer.
+kernelDimPerLayerNormal = [[3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3]]
+
+#  [Optional] List with number of layers, at the output of which to make a residual connection with the input of the previous layer. Ala Kaiming He et al, "Deep Residual Learning for Image Recognition".
+#  Note: Numbering starts from 1 for the first layer, which is not an acceptable value (no previous layer).
+#  Example: [4,6,8] will connect (add) to the output of Layer 4 the input of Layer 3. Also, input to 5th will be added to output of 6th, and input of 7th to output of 8th.
+#  Default: [], no residual connections
+layersWithResidualConnNormal = [4,6,8]
+
+#  [Optional] Layers to make of lower rank. Ala Yani Ioannou et al, "Training CNNs with Low-Rank Filters For Efficient Image Classification".
+#  Example: [3,5] will make the 3rd and 5th layers of lower rank.
+#  Default: []
+lowerRankLayersNormal = []
+
+#  +++++++++++Subsampled pathway+++++++++++
+#  [Optional] Specify whether to use a subsampled pathway. If False, all subsampled-related parameters will be read but disregarded in the model-construction.
+#  Default: False
+useSubsampledPathway = True
+
+#  [Optionals] The below parameters specify the subsampled-pathway architecture in a similar way as the normal.
+#  If they are omitted and useSubsampledPathway is set to True, the subsampled pathway will be made similar to the normal pathway (suggested for easy use).
+#  [WARN] Subsampled pathway MUST have the same size of receptive field as the normal. Limitation in the code. User could easily specify different number of FMs. But care must be given if number of layers is changed. In this case, kernel sizes should also be adjusted to achieve same size of Rec.Field.
+numberFMsPerLayerSubsampled = [30, 30, 40, 40, 40, 40, 50, 50]
+kernelDimPerLayerSubsampled = [[3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3], [3,3,3]]
+
+#  [Optional] How much to downsample the image that the subsampled-pathway processes.
+#  Requires either a) list of 3 integers, or b) list of lists of 3 integers.
+#  Input example a) [3,3,3]   Creates one additional parallel pathway, where input is subsampled by 3 in the x,y,z axis (the 3 elements of the list).
+#  Input example b) [[3,3,3], [5,5,5]]   Creates two additional parallel pathways. One with input subsampled by [3,3,3], and one subsampled by [5,5,5]. If not specified, each path mirrors the previous.
+#  Default: [[3,3,3]]
+subsampleFactor = [[3,3,3], [5,5,5]]
+
+#  [Optional] Residual Connections for subsampled pathway. See corresponding parameter for normal pathway.
+#  Default: mirrors the normal pathway, no residual connections
+layersWithResidualConnSubsampled = [4,6,8]
+
+#  [Optional] Layers to make of lower rank. See corresponding parameter for normal pathway.
+#  Default: Mirrors the normal pathway
+#lowerRankLayersSubsampled = []
+
+#  +++++++++++FC Layers+++++++++++
+#  [Optional] After the last layers of the normal and subsampled pathways are concatenated, additional Fully Connected hidden layers can be added before the final classification layer.
+#  Specify a list, with as many entries as the number of ADDITIONAL FC layers (other than the classification layer) to add. The entries specify the number of Feature Maps to use.
+#  Default: []
+numberFMsPerLayerFC = [250, 250]
+
+#  [Optional] Specify dimensions of the kernel in the first FC layer. This kernel combines the features from multiple scales. Applies to the final Classification layer if no hidden FC layers in network.
+#  Note: convolution with this kernel retains the size of the FMs (input is padded).
+#  Default: [1,1,1]
+kernelDimFor1stFcLayer = [3,3,3]
+
+#  [Optional] Residual Connections for the FC hidden layers. See corresponding parameter for normal pathway.
+#  Default: [], no connections.
+layersWithResidualConnFC = [2]
+
+#  +++++++++++Size of Image Segments+++++++++++
+#  DeepMedic does not process patches of the image, but larger image-segments. Specify their size here.
+
+#  [Required] Size of training segments influence the captured distribution of samples from the different classes (see DeepMedic paper)
+segmentsDimTrain = [37,37,37]
+#  [Optional] The size of segments to use during the validation-on-samples process that is performed throughout training if requested.
+#  Default: equal to receptive field, to validate on patches.
+segmentsDimVal = [17,17,17]
+#  [Optional] Bigger image segments for Inference are safe to use and only speed up the process. Only limitation is the GPU memory.
+#  Default: equal to the training segment.
+segmentsDimInference = [45,45,45]
+
+
+#  [Optionals] Dropout Rates on the input connections of the various layers. Each list should have as many entries as the number of layers in the corresponding pathway.
+#  0 = no dropout. 1= 100% drop of the neurons. Empty list for no dropout.
+#  Default: []
+dropoutRatesNormal = []
+dropoutRatesSubsampled = []
+#  Default: 50% dropout on every Fully Connected layer except for the first one after the concatenation
+#  Note: The list for FC rates should have one additional entry in comparison to "numberFMsPerLayerFC", for the classification layer.
+dropoutRatesFc = [0.0, 0.5, 0.5] # +1 for the classification layer!
+
+#  [Optional] Initialization method for the conv kernel weights.
+#  Options: ["normal", std] for sampling from N(0, std). ["fanIn", scale] for scaling variance with (scale/fanIn). Eg ["fanIn", 2] initializes ala "Delving Deep into Rectifiers".
+#  Default: ["fanIn", 2]
+convWeightsInit = ["fanIn", 2]
+#  [Optional] Activation Function for all convolutional layers. Allowed: "linear", "relu", "prelu", "elu", "selu"
+#  Default: "prelu"
+activationFunction = "prelu"
+
+#  [Optional] Batch Normalization uses a rolling average of the mus and std for inference.
+#  Specify over how many batches (SGD iterations) this moving average should be computed. Value <= 0 disables BN.
+#  Default : 60
+rollAverageForBNOverThatManyBatches = 60
+

BIN
segmentation/saved_models/DM_defaults.DM_train_VISCERAL16_Fold1.final.2019-10-01.07.46.19.932616.model.ckpt.data-00000-of-00001


BIN
segmentation/saved_models/DM_defaults.DM_train_VISCERAL16_Fold1.final.2019-10-01.07.46.19.932616.model.ckpt.index


BIN
segmentation/saved_models/DM_defaults.DM_train_qtii_LABELMASKS4.final.2020-10-31.05.59.36.425298.model.ckpt.data-00000-of-00001


BIN
segmentation/saved_models/DM_defaults.DM_train_qtii_LABELMASKS4.final.2020-10-31.05.59.36.425298.model.ckpt.index


+ 6 - 0
segmentation/saved_models/INFO_ABOUT_MODELS.txt

@@ -0,0 +1,6 @@
+Retrained on more patients, improved thyroid performance, only for 4 organs, labels are changed to 1-4!
+DM_defaults.DM_train_qtii_LABELMASKS4.final.2020-10-31.05.59.36.425298.model.ckpt
+
+Old model on which all results are obtained, labels 1-16, poor thyroid performance.
+DM_defaults.DM_train_VISCERAL16_Fold1.final.2019-10-01.07.46.19.932616.model.ckpt
+

+ 2 - 0
segmentation/test/testChannels_CT.cfg

@@ -0,0 +1,2 @@
+D:\PhD\IPNU\PROSPECTIVE\1160_00\VISIT_0\Processed\_CT_0mean1std_notCropped_2mmVoxel.nii
+D:\PhD\IPNU\PROSPECTIVE\1160_00\VISIT_1\Processed\_CT_0mean1std_notCropped_2mmVoxel.nii

+ 1 - 0
segmentation/test/testChannels_CT.cfg.template

@@ -0,0 +1 @@
+__ct__

+ 58 - 0
segmentation/test/testConfig.cfg

@@ -0,0 +1,58 @@
+# -*- coding: utf-8 -*-
+#  Default values are set internally, if the corresponding parameter is not found in the configuration file.
+
+#  [Optional but highly suggested] The name will be used for naming folders to save the results in.
+#  Default: "testSession"
+sessionName = "currentSession"
+
+#  [Required] The main folder that the output will be placed in.
+folderForOutput = "../output/"
+
+#  [Optional] Path to a saved model, to load parameters from in the beginning of the session. If one is also specified using the command line, the latter will be used.
+cnnModelFilePath = "../saved_models/DM_defaults.DM_train_VISCERAL16_Fold1.final.2019-10-01.07.46.19.932616.model.ckpt"
+
+#  +++++++++++ Input +++++++++++
+#  [Required] A list that should contain as many entries as the channels of the input image (eg multi-modal MRI). The entries should be paths to files. Those files should be listing the paths to the corresponding channels for each test-case. (see example files).
+channels = ["./testChannels_CT.cfg"]
+
+#  [Required] The path to a file, which should list names to give to the results for each testing case. (see example file).
+namesForPredictionsPerCase = "./testNamesOfPredictions.cfg"
+
+#  [Optional] The path to a file, which should list paths to the Region-Of-Interest masks for each testing case.
+#  If ROI masks are provided, inference will only be performed within them (faster). If not specified, inference will be performed in the whole volume.
+roiMasks = "./testRoiMasks.cfg"
+
+#  [Optional] The path to a file which should list paths to the Ground Truth labels of each testing case. If provided, DSC metrics will be reported. Otherwise comment out this entry.
+# gtLabels = "./testGtLabels_retmel.cfg"
+
+# [Optional] Batch size. Default: 10
+batchsize = 1
+
+#  +++++++++++Predictions+++++++++++
+#  [Optional] Specify whether to save segmentation map. Default: True
+saveSegmentation = True
+#  [Optional] Specify a list with as many entries as the task's classes. True/False to save/not the probability map for the corresponding class. Default: [True,True...for all classes]
+saveProbMapsForEachClass = [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False]
+
+# overlap=1 is lots of overlap (no step, gets stuck), overlap=0 is no overlap
+overlap = 0.0
+#  +++++++++++Feature Maps+++++++++++
+#  [Optionals] Specify whether to save the feature maps in separate files and/or all together in a 4D image. Default: False for both cases.
+#saveIndividualFms = True
+#saveAllFmsIn4DimImage = False
+
+#  [Optionals] A model may have too many feature maps, and some may not be needed. For this, we allow specifying which FMs to save. 
+#  Specify for each type of pathway (normal/subsampled/FC), a list with as many sublists as the layers of the pathway.
+#  Each sublist (one for each layer), should have 2 numbers. These are the minimum (inclusive) and maximum (exclusive) indices of the Feature Maps that we wish to save from the layer.
+#  The preset example saves the Feature Maps from index 0 (first FM) to 150 of the last hidden FC layer, before the classification layer.
+#  Default: [] for all.
+#minMaxIndicesOfFmsToSaveFromEachLayerOfNormalPathway = []
+#minMaxIndicesOfFmsToSaveFromEachLayerOfSubsampledPathway = [[],[],[],[],[],[],[],[]]
+#minMaxIndicesOfFmsToSaveFromEachLayerOfFullyConnectedPathway = [[],[0,150],[]]
+
+
+#  ==========Generic=============
+#  [Optional] Pad images to fully convolve. Default: True
+padInputImagesBool = True
+
+

+ 59 - 0
segmentation/test/testConfig.cfg.template

@@ -0,0 +1,59 @@
+# -*- coding: utf-8 -*-
+#  Default values are set internally, if the corresponding parameter is not found in the configuration file.
+
+#  [Optional but highly suggested] The name will be used for naming folders to save the results in.
+#  Default: "testSession"
+sessionName = "currentSession"
+
+#  [Required] The main folder that the output will be placed in.
+folderForOutput = "__workDir__/output"
+
+#  [Optional] Path to a saved model, to load parameters from in the beginning of the session. If one is also specified using the command line, the latter will be used.
+cnnModelFilePath = "/home/nixUser/software/src/irAEMM/segmentation/saved_models/__model__"
+
+#  +++++++++++ Input +++++++++++
+#  [Required] A list with as many entries as the input image has channels (e.g. multi-modal MRI). Each entry is a path to a file, and each such file lists the paths to the corresponding channel for every test case (see example files).
+#channels = ["./testChannels_CT.cfg"]
+channels = ["__workDir__/testChannels_CT.cfg"]
+
+#  [Required] The path to a file which should list the names to give to the results for each testing case (see example file).
+namesForPredictionsPerCase = "__workDir__/testNamesOfPredictions.cfg"
+
+#  [Optional] The path to a file which should list paths to the Region-Of-Interest masks for each testing case.
+#  If ROI masks are provided, inference is performed only within them (faster). If not specified, inference is performed on the whole volume.
+roiMasks = "__workDir__/testRoiMasks.cfg"
+
+#  [Optional] The path to a file which should list paths to the Ground Truth labels of each testing case. If provided, DSC metrics will be reported. Otherwise comment out this entry.
+# gtLabels = "./testGtLabels_retmel.cfg"
+
+# [Optional] Batch size. Default: 10
+batchsize = 1
+
+#  +++++++++++Predictions+++++++++++
+#  [Optional] Specify whether to save segmentation map. Default: True
+saveSegmentation = True
+#  [Optional] Specify a list with as many entries as the task has classes; True/False saves or skips the probability map for the corresponding class. Default: [True, True, ... for all classes]
+saveProbMapsForEachClass = [False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False]
+
+# overlap=1 means maximal overlap between inference tiles (zero step, the run gets stuck); overlap=0 means no overlap
+overlap = 0.0
+#  +++++++++++Feature Maps+++++++++++
+#  [Optional] Specify whether to save the feature maps in separate files and/or all together in a 4D image. Default: False for both.
+#saveIndividualFms = True
+#saveAllFmsIn4DimImage = False
+
+#  [Optional] A model may have more feature maps than needed, so the FMs to save can be specified explicitly.
+#  For each type of pathway (normal/subsampled/FC), specify a list with as many sublists as the pathway has layers.
+#  Each sublist (one per layer) should hold 2 numbers: the minimum (inclusive) and maximum (exclusive) indices of the Feature Maps to save from that layer.
+#  The preset example saves the Feature Maps from index 0 (first FM) to 150 of the last hidden FC layer, before the classification layer.
+#  Default: [] for all.
+#minMaxIndicesOfFmsToSaveFromEachLayerOfNormalPathway = []
+#minMaxIndicesOfFmsToSaveFromEachLayerOfSubsampledPathway = [[],[],[],[],[],[],[],[]]
+#minMaxIndicesOfFmsToSaveFromEachLayerOfFullyConnectedPathway = [[],[0,150],[]]
+
+
+#  ==========Generic=============
+#  [Optional] Pad images to fully convolve. Default: True
+padInputImagesBool = True
+
+

+ 2 - 0
segmentation/test/testNamesOfPredictions.cfg

@@ -0,0 +1,2 @@
+1160_00-VISIT_0.nii
+1160_00-VISIT_1.nii

+ 1 - 0
segmentation/test/testNamesOfPredictions.cfg.template

@@ -0,0 +1 @@
+__seg__

+ 2 - 0
segmentation/test/testRoiMasks.cfg

@@ -0,0 +1,2 @@
+D:\PhD\IPNU\PROSPECTIVE\1160_00\VISIT_0\Processed\_patientmask_notCropped_2mmVoxel.nii.gz
+D:\PhD\IPNU\PROSPECTIVE\1160_00\VISIT_1\Processed\_patientmask_notCropped_2mmVoxel.nii.gz

+ 1 - 0
segmentation/test/testRoiMasks.cfg.template

@@ -0,0 +1 @@
+__roi__

+ 504 - 28
slicerModule/iraemmBrowser.py

@@ -5,6 +5,7 @@ from slicer.ScriptedLoadableModule import *
 import slicerNetwork
 import loadDicom
 import json
+import datetime
 
 #
 # labkeySlicerPythonExtension
@@ -43,14 +44,15 @@ class iraemmBrowserWidget(ScriptedLoadableModuleWidget):
     # Instantiate and connect widgets ...
     self.network=slicerNetwork.labkeyURIHandler()
 
-    fconfig=os.path.join(os.path.expanduser('~'),'.labkey','onko-nix.json')
+    fconfig=os.path.join(os.path.expanduser('~'),'.labkey','network.json')
     self.network.parseConfig(fconfig)
     self.network.initRemote()
     self.project="iPNUMMretro/Study"
     self.dataset="Imaging1"
+    self.reviewDataset="ImageReview"
+    self.aeDataset="PET"
+    self.segmentList=['liver','bowel','thyroid','lung','kidney','pancreas']
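+    #subset of segmented organs offered for review; the full label list is segLabel in the logic class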
 
-
-    self.logic=iraemmBrowserLogic(self)
     
 
     ds=self.network.filterDataset(self.project,self.dataset,[])
@@ -59,42 +61,137 @@ class iraemmBrowserWidget(ScriptedLoadableModuleWidget):
 
     
     #
-    # Parameters Area
+    # Setup Area
+    #
+    setupCollapsibleButton = ctk.ctkCollapsibleButton()
+    setupCollapsibleButton.text = "Setup"
+    self.layout.addWidget(setupCollapsibleButton)
+    
+    setupFormLayout = qt.QFormLayout(setupCollapsibleButton)
+
+    self.participantField=qt.QLabel("PatientId")
+    setupFormLayout.addRow("Participant field:",self.participantField)
+    
+    self.ctField=qt.QLabel("ctResampled")
+    setupFormLayout.addRow("Data field (CT):",self.ctField)
+
+    self.petField=qt.QLabel("petResampled")
+    setupFormLayout.addRow("Data field (PET):",self.petField)
+
+    self.segmentationField=qt.QLabel("Segmentation")
+    setupFormLayout.addRow("Data field (Segmentation):",self.segmentationField)
+    
+    self.logic=iraemmBrowserLogic(self)
+
+
+    #
+    # Patients Area
     #
-    connectionCollapsibleButton = ctk.ctkCollapsibleButton()
-    connectionCollapsibleButton.text = "Patients"
-    self.layout.addWidget(connectionCollapsibleButton)
+    patientsCollapsibleButton = ctk.ctkCollapsibleButton()
+    patientsCollapsibleButton.text = "Patients"
+    self.layout.addWidget(patientsCollapsibleButton)
 
-    connectionFormLayout = qt.QFormLayout(connectionCollapsibleButton)
+    patientsFormLayout = qt.QFormLayout(patientsCollapsibleButton)
 
 
     self.patientList=qt.QComboBox()
     for id in ids:
         self.patientList.addItem(id)
     self.patientList.currentIndexChanged.connect(self.onPatientListChanged)
-    connectionFormLayout.addRow("Patient:",self.patientList)
+    patientsFormLayout.addRow("Patient:",self.patientList)
 
     self.visitList=qt.QComboBox()
     self.visitList.currentIndexChanged.connect(self.onVisitListChanged)
-    connectionFormLayout.addRow("Visit:",self.visitList)
+    patientsFormLayout.addRow("Visit:",self.visitList)
 
-    self.ctCode=qt.QLabel("ctCode")
-    connectionFormLayout.addRow("CT:",self.ctCode)
 
+    self.ctCode=qt.QLabel("ctCode")
+    patientsFormLayout.addRow("CT:",self.ctCode)
+    
     self.petCode=qt.QLabel("petCode")
-    connectionFormLayout.addRow("PET:",self.petCode)
+    patientsFormLayout.addRow("PET:",self.petCode)
+
+    self.segmentationCode=qt.QLabel("segmentationCode")
+    patientsFormLayout.addRow("Segmentation",self.segmentationCode)
 
     self.patientLoad=qt.QPushButton("Load")
     self.patientLoad.clicked.connect(self.onPatientLoadButtonClicked)
-    connectionFormLayout.addRow("Load patient",self.patientLoad)
+    patientsFormLayout.addRow("Load patient",self.patientLoad)
 
+    self.patientClear=qt.QPushButton("Clear")
+    self.patientClear.clicked.connect(self.onPatientClearButtonClicked)
+    patientsFormLayout.addRow("Clear patient",self.patientClear)
+    
     self.keepCached=qt.QCheckBox("keep Cached")
     self.keepCached.setChecked(1)
-    connectionFormLayout.addRow("Keep cached",self.keepCached)
+    patientsFormLayout.addRow("Keep cached",self.keepCached)
     
     #set to a defined state
     self.onPatientListChanged(0)
 
+
+    #
+    # Review Area
+    #
+    reviewCollapsibleButton = ctk.ctkCollapsibleButton()
+    reviewCollapsibleButton.text = "Review"
+    self.layout.addWidget(reviewCollapsibleButton)
+    
+    self.reviewBoxLayout = qt.QVBoxLayout(reviewCollapsibleButton)
+
+    self.reviewFormLayout = qt.QFormLayout()
+
+
+    self.reviewSegment=qt.QComboBox()
+    self.reviewSegment.currentIndexChanged.connect(self.onReviewSegmentChanged)
+    self.reviewFormLayout.addRow("Selected region:",self.reviewSegment)
+    
+   
+
+    self.reviewResult=qt.QComboBox()
+    self.reviewFormLayout.addRow("What do you think about the segmentation:",\
+            self.reviewResult)
+    reviewOptions=['Select','Excellent','Minor deficiencies',\
+            'Major deficiencies','Unusable']
+    for opt in reviewOptions:
+        self.reviewResult.addItem(opt)
+    
+    self.aeResult=qt.QComboBox()
+    self.reviewFormLayout.addRow("Is organ suffering from adverse effect?",\
+            self.aeResult)
+    aeOptions=['Select','Yes','No']
+    for opt in aeOptions:
+        self.aeResult.addItem(opt)
+    #self.aeResult.setCurrentIndex(0)
+
+    self.updateReview=qt.QPushButton("Save")
+    self.reviewFormLayout.\
+            addRow("Save segmentation and AE decision for current segment",\
+            self.updateReview)
+    self.updateReview.clicked.connect(self.onUpdateReviewButtonClicked)
+
+    self.reviewBoxLayout.addLayout(self.reviewFormLayout)
+
+    submitFrame=qt.QGroupBox("Submit data")
+    
+    self.submitFormLayout=qt.QFormLayout()
+
+    self.reviewComment=qt.QTextEdit("this is a test")
+    self.submitFormLayout.addRow("Comments (optional)",\
+            self.reviewComment)
+    
+    self.submitReviewButton=qt.QPushButton("Submit")
+    self.submitFormLayout.addRow("Submit to database",\
+            self.submitReviewButton)
+    self.submitReviewButton.clicked.connect(self.onSubmitReviewButtonClicked)
+    
+    submitFrame.setLayout(self.submitFormLayout)
+    submitFrame.setFlat(1)
+    #submitFrame.setFrameShape(qt.QFrame.StyledPanel)
+    #submitFrame.setFrameShadow(qt.QFrame.Sunken)
+    submitFrame.setStyleSheet("background-color:rgba(220,215,180,45)")
+    self.reviewBoxLayout.addWidget(submitFrame)
+
   def onPatientListChanged(self,i):
       idFilter={'variable':'PatientId','value':self.patientList.currentText,'oper':'eq'}
       ds=self.network.filterDataset(self.project,self.dataset, [idFilter])
@@ -111,28 +208,149 @@ class iraemmBrowserWidget(ScriptedLoadableModuleWidget):
       except IndexError:
         return
       print("Visit: Selected item: {}->{}".format(i,s))
-      idFilter={'variable':'PatientId','value':self.patientList.currentText,'oper':'eq'}
+      idFilter={'variable':'PatientId',\
+              'value':self.patientList.currentText,'oper':'eq'}
       sFilter={'variable':'SequenceNum','value':s,'oper':'eq'}
       ds=self.network.filterDataset(self.project,self.dataset,[idFilter,sFilter])
       if not len(ds['rows'])==1:
-          print("Found incorrect number {} of matches for [{}]/[{}]".format(len(ds['rows']),\
+          print("Found incorrect number {} of matches for [{}]/[{}]".\
+                  format(len(ds['rows']),\
                   self.patientList.currentText,s))
       row=ds['rows'][0]
 
       #copy row properties for data access
       self.currentRow=row
-      self.petCode.setText(row['petResampled'])
-      self.ctCode.setText(row['ctResampled'])
-
+      self.petCode.setText(row[self.petField.text])
+      self.ctCode.setText(row[self.ctField.text])
+      self.segmentationCode.setText(row[self.segmentationField.text])
 
   def onPatientLoadButtonClicked(self):
       print("Load")
       #delegate loading to logic
       #try:
       self.logic.loadImage(self.currentRow,self.keepCached.isChecked())
+      segmentList=self.logic.compileSegmentation()
+      #also bladder, vertebraL1, stomach, heart
+      for seg in segmentList:
+          if not seg in self.segmentList:
+              continue
+          #filter to most important ones
+          self.reviewSegment.addItem(seg)
+      self.logic.loadReview(self.currentRow)
+      self.logic.loadAE(self.currentRow)
+      for segment in self.segmentList:
+          rIdx=self.logic.getReviewResult(segment)
+          aIdx=self.logic.getAEResult(segment)
+          print("Segment {}: {}/{}".format(segment,rIdx,aIdx))
+          try:
+              if (rIdx+aIdx)>0:
+                  self.updateResult(segment,rIdx,aIdx)
+          except TypeError:
+              continue
+      try:
+          self.reviewComment.setPlainText(self.logic.reviewComment)
+      except AttributeError:
+          pass
+
+      self.onReviewSegmentChanged()
       #except AttributeError:
       #    print("Missing current row")
       #    return
+  
+  def onReviewSegmentChanged(self):
+      segment=self.reviewSegment.currentText
+      self.logic.hideSegments()
+      self.logic.showSegment(segment)
+      #set reviewFlag to stored value
+      self.reviewResult.setCurrentIndex(self.logic.getReviewResult(segment))
+      self.aeResult.setCurrentIndex(self.logic.getAEResult(segment))
+
+  def onSubmitReviewButtonClicked(self):
+      print("Submit")
+      print("Selected review:{}/{}".format(self.reviewResult.currentIndex,
+          self.reviewResult.currentText))
+      print("Comment:{}".format(self.reviewComment))
+      self.logic.submitReview(self.currentRow,\
+              self.reviewComment.plainText)
+      self.logic.submitAE(self.currentRow)
+
+  def onUpdateReviewButtonClicked(self):
+      print("Save")
+      
+      segment=self.reviewSegment.currentText
+      self.logic.updateReview(segment,\
+              self.reviewResult.currentIndex)
+
+      self.logic.updateAE(segment,\
+              self.aeResult.currentIndex)
+      self.updateResult(segment,self.reviewResult.currentIndex,\
+              self.aeResult.currentIndex)
+
+     
+  def updateResult(self,segment,reviewResult,aeResult):
+      reviewText=self.reviewResult.itemText(reviewResult)
+      aeText=self.aeResult.itemText(aeResult)
+
+      idx=self.findCompletedSegment(segment)
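+      #idx<0: the segment has no rows in the submit box yet, so insert a review row and an AE row for it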
+
+      if idx<0:
+          qReview=qt.QLabel(reviewText)
+          self.submitFormLayout.insertRow(0,segment,qReview)
+          qAE=qt.QLabel(aeText)
+          self.submitFormLayout.insertRow(1,segment+'AE',qAE)
+          try:
+              self.segmentsCompleted.append(segment)
+              self.segmentsCompleted.append(segment+'AE')
+          except AttributeError:
+              self.segmentsCompleted=[]
+              self.segmentsCompleted.append(segment)
+              self.segmentsCompleted.append(segment+'AE')
+      else:
+          qReview=self.submitFormLayout.itemAt(idx,1).widget()
+          qReview.setText(reviewText)
+          qAE=self.submitFormLayout.itemAt(idx+1,1).widget()
+          qAE.setText(aeText)
+
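+      #background colors follow the combo box order: Select/Excellent/Minor deficiencies/Major deficiencies/Unusable for review, Select/Yes/No for AE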
+      reviewColors=['pink','green','yellow','orange','red']
+      qReview.setStyleSheet("background-color: "+reviewColors[reviewResult])
+      aeColors=['pink','red','green']
+      qAE.setStyleSheet("background-color: "+aeColors[aeResult])
+
+
+
+  def findCompletedSegment(self,segment):
+
+      for i in range(self.submitFormLayout.rowCount()):
+          if self.submitFormLayout.itemAt(i,0).widget().text==segment:
+              return i
+      return -1
+
+  def removeCompletedSegments(self):
+      
+      try:
+          segments=self.segmentsCompleted
+      except AttributeError:
+          return 
+
+      for seg in segments:
+          idx=self.findCompletedSegment(seg)
+          if idx>-1:
+              self.submitFormLayout.removeRow(idx)
+      
+      self.segmentsCompleted=[]
+
+
+  def onPatientClearButtonClicked(self):
+      self.logic.clearVolumesAndSegmentations()
+      self.reviewSegment.clear()
+      self.removeCompletedSegments()
+      self.reviewComment.clear()
+
 
   def cleanup(self):
     pass
@@ -157,6 +375,16 @@ class iraemmBrowserLogic(ScriptedLoadableModuleLogic):
           self.parent=parent
           self.net=parent.network
           self.project=parent.project
+          self.participantField=parent.participantField.text
+          self.segmentList=parent.segmentList
+
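+      #numeric labelmap values produced by the segmentation model, mapped to organ names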
+      self.segLabel={'1':'liver','2':'spleen','3':'lung','4':'thyroid',\
+              '5':'kidney','6':'pancreas','7':'gallbladder','8':'bladder',\
+              '9':'aorta','10':'trachea','11':'sternum','12':'vertebraL1',\
+              '13':'adrenal','14':'psoasMajor','15':'rectus',\
+              '16':'bowel','17':'stomach','18':'heart'}
+
+      
 
   def setLabkeyInterface(self,net):
       #additional way of setting the labkey network interface 
@@ -170,29 +398,277 @@ class iraemmBrowserLogic(ScriptedLoadableModuleLogic):
       
       
       #fields={'ctResampled':True,'petResampled':False}
-      fields=['ctResampled','petResampled']
+      fields={"CT":self.parent.ctField.text,\
+              "PET":self.parent.petField.text,\
+              "Segmentation":self.parent.segmentationField.text}
 
       relativePaths={x:self.project+'/@files/preprocessedImages/'\
-             +row['patientCode']+'/'+row['visitCode']+'/'+row[x]\
-             for x in fields}
+             +row['patientCode']+'/'+row['visitCode']+'/'+row[y]\
+             for (x,y) in fields.items()}
 
-      volumeNode={}
+      self.volumeNode={}
       for f in relativePaths:
           p=relativePaths[f]
           labkeyPath=self.net.GetLabkeyPathFromRelativePath(p)
           rp=self.net.head(labkeyPath)
-          if not rp.code==200:
+          if not slicerNetwork.labkeyURIHandler.HTTPStatus(rp):
               print("Failed to get {}".format(labkeyPath))
               continue
 
           #pushes it to background
-          volumeNode[f]=self.net.loadNode(p,'VolumeFile',returnNode=True,keepCached=keepCached)
+          properties={}
+          #make sure segmentation gets loaded as a labelmap
+          if f=="Segmentation":
+              properties["labelmap"]=1
+
+          self.volumeNode[f]=self.net.loadNode(p,'VolumeFile',\
+                  properties=properties,returnNode=True,keepCached=keepCached)
+
+      #mimic abdominalCT standardized window setting
+      self.volumeNode['CT'].GetScalarVolumeDisplayNode().\
+              SetWindowLevel(1400, -500)
+      #set colormap for PET to PET-Heat (this is a verbatim setting from
+      #the Volumes->Display->Lookup Table colormap identifier)
+      self.volumeNode['PET'].GetScalarVolumeDisplayNode().\
+              SetAndObserveColorNodeID(\
+              slicer.util.getNode('PET-Heat').GetID())
+      slicer.util.setSliceViewerLayers(background=self.volumeNode['CT'],\
+          foreground=self.volumeNode['PET'],foregroundOpacity=0.1,fit=True)
+
+
+  #segmentations
+
+  def compileSegmentation(self):
+      try:
+          labelmapVolumeNode = self.volumeNode['Segmentation']
+      except KeyError:
+          print("No segmentaion volumeNode available")
+          return
+     
+      self.segmentationNode = slicer.mrmlScene.AddNewNodeByClass('vtkMRMLSegmentationNode')
+      slicer.modules.segmentations.logic().\
+              ImportLabelmapToSegmentationNode(labelmapVolumeNode, self.segmentationNode)
+      
+      
+      segmentList=[]
+      
+      seg=self.segmentationNode.GetSegmentation()
+
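+      #imported segments are named by their numeric label ('1','2',...); rename them to organ names via segLabel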
+      for i in range(seg.GetNumberOfSegments()):
+          segment=seg.GetNthSegment(i)
+          segment.SetName(self.segLabel[segment.GetName()])
+          segmentList.append(segment.GetName())
+
+      #seg.CreateClosedSurfaceRepresentation()
+      slicer.mrmlScene.RemoveNode(labelmapVolumeNode)
+      self.volumeNode.pop('Segmentation',None)
+
+      #return list of segment names
+      return segmentList
+  
+  def hideSegments(self):
+
+      try:
+          displayNode=self.segmentationNode.GetDisplayNode()
+      except AttributeError:
+          return
+
+      seg=self.segmentationNode.GetSegmentation()
+      for i in range(seg.GetNumberOfSegments()):
+          #segment=self.segmentationNode.GetSegmentation().GetNthSegment(i)
+          segmentID=seg.GetNthSegmentID(i)
+          displayNode.SetSegmentVisibility(segmentID, False)
+      #print("Done")
+  
+  def showSegment(self,name):
+      try:
+          displayNode=self.segmentationNode.GetDisplayNode()
+      except AttributeError:
+          return
+      
+      seg=self.segmentationNode.GetSegmentation()
+      for i in range(seg.GetNumberOfSegments()):
+          segment=seg.GetNthSegment(i)
+          if not segment.GetName()==name:
+              continue
+          segmentID=seg.GetNthSegmentID(i)
+          displayNode.SetSegmentVisibility(segmentID, True)
+          break
+      #print("Done")
+  
+
+  #clear 
+
+  def clearVolumesAndSegmentations(self):
+      nodes=slicer.util.getNodesByClass("vtkMRMLVolumeNode")
+      nodes.extend(slicer.util.getNodesByClass("vtkMRMLSegmentationNode"))
+      res=[slicer.mrmlScene.RemoveNode(f) for f in nodes] 
+      self.segmentationNode=None
+      self.reviewResult={}
+      self.aeList={}
+
+  #reviews by segment
+
+  def updateReview(self,segment,value):
+      try:
+          self.reviewResult[segment]=value
+      except AttributeError:
+          self.reviewResult={}
+          self.updateReview(segment,value)
+  
+  def getReviewResult(self,segment):
+      try:
+          return self.reviewResult[segment]
+      except AttributeError:
+          #review result not initialized
+          return 0
+      except KeyError:
+          #segment not done yet
+          return 0
+  
+  #load review from labkey
+
+  def getUniqueRows(self, project, dataset, fields, inputRow):
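+      #build an equality filter for each listed field of inputRow and return the matching dataset rows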
+      filters=[]
+
+      for f in fields:
+          filters.append({'variable':f,'value':str(inputRow[f]),'oper':'eq'})
+      
+      
+      ds=self.net.filterDataset(project,dataset,filters)
+      return ds['rows']
+
+  def loadReview(self,currentRow):
+
+      #see if we have already done a review
+      rows=self.getUniqueRows(self.parent.project,self.parent.reviewDataset,\
+              [self.participantField,'visitCode','Segmentation'],currentRow)
+
+      if len(rows)==0:
+          return
+
+      row=rows[0]
+      for label in self.segmentList:
+          #name=self.segLabel[label]+'Review'
+          name=label+'Review'
+          try:
+              self.updateReview(label,row[name])
+          except KeyError:
+              continue 
+      self.reviewComment=row['reviewComment']
+
+  #submit review to labkey
+  def submitReview(self,currentRow,comment):
+      fields=[self.participantField,'visitCode','Segmentation']
+      rows=self.getUniqueRows(self.parent.project,self.parent.reviewDataset,\
+              fields,currentRow)
+
+
+      mode='update'
+      
+      if len(rows)==0:
+          mode='insert'
+          row={}
+          for f in fields:
+              row[f]=currentRow[f]
+          
+          frows=self.getUniqueRows(self.parent.project,self.parent.reviewDataset,\
+                  [self.participantField,'visitCode'],currentRow)
+
+          row['SequenceNum']=currentRow['SequenceNum']+0.01*len(frows)
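+          #offset SequenceNum by 0.01 per existing review row so repeated reviews of the same visit keep unique keys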
+      
+      else:     
+          row=rows[0]
+      
+      seg=self.segmentationNode.GetSegmentation()
+ 
+      for i in range(seg.GetNumberOfSegments()):
+          segment=seg.GetNthSegment(i)
+          fieldName=segment.GetName()+'Review'
+          value=self.getReviewResult(segment.GetName())
+          row[fieldName]=value
+
+      row['reviewComment']=comment
+      row['Date']=datetime.datetime.now().ctime()
+      self.net.modifyDataset(mode,self.parent.project,\
+              self.parent.reviewDataset,[row])
+      
+      print("review submitted")
+
+
+#AE management
+  def updateAE(self,segment,value):
+      try:
+          self.aeList[segment]=value
+      except AttributeError:
+          self.aeList={}
+          self.updateAE(segment,value)
+  
+  def getAEResult(self,segment):
+      try:
+          return self.aeList[segment]
+      except AttributeError:
+          #review result not initialized (unknown)
+          return 0
+      except KeyError:
+          #segment not done yet (unknown)
+          return 0
+  
+
+
+  def loadAE(self,currentRow):
+      fields=[self.participantField,'petResampled']
+      rows=self.getUniqueRows(self.parent.project,self.parent.aeDataset,\
+              fields,currentRow)
+      
+      if len(rows)==0:
+          return
+
+      print("Found {} rows".format(len(rows)))
+      row=rows[0]
+      
+      for seg in self.segmentList:
+          name=seg+'AE'
+          try:
+              self.updateAE(seg,row[name])
+          except AttributeError:
+              continue
+          except KeyError:
+              continue
+
+  def submitAE(self,currentRow):
+      fields=[self.participantField,'petResampled']
+      rows=self.getUniqueRows(self.parent.project,self.parent.aeDataset,\
+              fields,currentRow)
+      
+      
+      if len(rows)==0:
+          mode='insert'
+          row={}
+          for f in fields:
+              row[f]=currentRow[f]
           
-      slicer.util.setSliceViewerLayers(background=volumeNode['ctResampled'],\
-          foreground=volumeNode['petResampled'],foregroundOpacity=0.5,fit=True)
+          row['SequenceNum']=currentRow['SequenceNum']
+      
+      else:
+          mode='update'
+          row=rows[0]
+     
+      for seg in self.segmentList:
+          row[seg+'AE']=self.getAEResult(seg) 
+
+      row['Date']=datetime.datetime.now().ctime()
+      resp=self.net.modifyDataset(mode,self.parent.project,\
+              self.parent.aeDataset,[row])
+      print("Response {}".format(resp))
+      print("AE submitted")
+      
+      
+
 
 
 class irAEMMBrowserTest(ScriptedLoadableModuleTest):
+
   """
   This is the test case for your scripted module.
   Uses ScriptedLoadableModuleTest base class, available at:

+ 66 - 0
templates/segmentation.json.sample

@@ -0,0 +1,66 @@
+{
+ "setVariables":["__tempBase__","__segBase__","__roiFile__","__petFile__","__ctFile__","__segFile__","__modelName__"],
+ "setVariablesComment":"this variables will get updated with local values like home and can be used to set variables further on",
+ "__tempBase__":"__home__/temp/segmentation",
+ "__segBase__":"/home/nixUser/software/src/irAEMM/segmentation",
+ "__roiFile__":"testMask.nii.gz",
+ "__ctFile__":"testCT.nii.gz",
+ "__petFile__":"testPET.nii.gz",
+ "__segFile__":"segmentation.nii.gz",
+ "__modelName__":"DM_defaults.DM_train_VISCERAL16_Fold1.final.2019-10-01.07.46.19.932616.model.ckpt",
+ "tempBase":"__tempBase__",
+ "model":"__model__",
+ "project":"IPNUMMprospektiva/Study",
+ "targetSchema":"study",
+ "targetQuery":"Imaging1",
+ "participantField":"ParticipantId",
+ "imageDir":"preprocessedImages",
+ "images":{
+	"CT":{
+		"queryField":"ctResampled",
+		"tempFile":"__ctFile__"},
+	"PET":{
+		"queryField":"petResampled",
+		"tempFile":"__petFile__"},
+	"patientmask":{
+		"queryField":"ROImask",
+		"tempFile":"__roiFile__"}
+ },
+ "replacePattern":{
+	 "__workDir__":"__tempBase__",
+	 "__roi__":"__tempBase__/__roiFile__",
+	 "__pet__":"__tempBase__/__petFile__",
+	 "__ct__":"__tempBase__/__ctFile__",
+	 "__seg__":"__tempBase__/__segFile__",
+	 "__model__":"__modelName__"
+ },
+ "deepmedic": {
+	 "config":{
+		 "model":{
+		 	"template":"__segBase__/model/modelConfig.cfg.template",
+		 	"out":"__tempBase__/modelConfig.cfg"
+	 	},
+	 	"test":{
+			"template":"__segBase__/test/testConfig.cfg.template",
+		 	"out":"__tempBase__/testConfig.cfg"
+	 	},
+		"predictions":{
+			"template":"__segBase__/test/testNamesOfPredictions.cfg.template",
+			"out":"__tempBase__/testNamesOfPredictions.cfg"
+		},
+		"CT":{
+			"template":"__segBase__/test/testChannels_CT.cfg.template",
+			"out":"__tempBase__/testChannels_CT.cfg"
+		},
+		"ROI":{
+			"template":"__segBase__/test/testRoiMasks.cfg.template",
+			"out":"__tempBase__/testRoiMasks.cfg"
+		}
+	 }
+ }
+}