Kitware Source (http://www.kitware.com)
News and updates for VolView in Kitware Source. Copyright Kitware Inc. Tue, 02 Sep 2014 15:07:05 -0400.

Kitware News
http://www.kitware.com/source/home/post/149
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Google Tango Data Visualization</strong></span></span></p>
<p>Kitware announced the release of a tutorial that describes how to use ParaView to extract, process, and visualize data from Google Project Tango development kits. The purpose of the tutorial is to promote faster development of the open Project Tango platform. The tutorial is published on Kitware&#39;s blog.</p>
<p>Using the plugin and visualization capabilities of ParaView, users can interact with their Project Tango data without having to write a single line of code. For those who do not have Project Tango Development kits, the tutorial provides links to download sample data.</p>
<p>The process starts with gathering data from the devices using the Android SDK. Then, this data is read into ParaView using a Tango-specific data plugin that is available as an open source download. The data is by nature 3D+time, meaning that it is a sequence of point clouds acquired over time. This data includes readings of the device position and orientation in 3D space, which indicate where the device was and the direction in which it was pointing at the moment a particular point cloud was acquired.</p>
<p>After being loaded into ParaView, the point clouds are aligned and stitched together. ParaView can then use its animation capabilities to display the intermediate point clouds and how they contribute to reconstructing a larger area of space around the device. Filters are available from the Visualization Toolkit and the Point Cloud Library for advanced analysis. Example data sets are available for download on the MIDAS platform, along with installers for the customized ParaView builds for Windows, Linux, and Mac.</p>
<p>Kitware will continue exploring the use of the Project Tango devices for engineering, simulation, measurement, and medical applications.</p>
<p><a href="/source/files/4_278157956.jpg" target="_blank"><img src="/source/files/Small.4_278157956.jpg" style="width: 400px; margin-left: 125px; margin-right: 125px;" /></a></p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>KWIVER WAMI Tracking System Released</strong></span></span></p>
<p>Kitware announced the immediate availability of its state-of-the-art Wide Area Motion Imagery (WAMI) tracking system on Forge.mil, as part of the Kitware Image &amp; Video Exploitation and Retrieval Toolkit (KWIVER). Full source code is available with unlimited rights, under the conditions of the DoD Community Source Usage Agreement, to anyone who can access Forge.mil.</p>
<p>Developed with funding from the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL), Kitware&rsquo;s WAMI tracker in KWIVER is capable of producing tracks in real time from WAMI across a wide range of resolutions and frame rates, by dynamically distributing processing across a compute cluster. The resulting data assists intelligence analysts as they support military operations, both live and forensically. Recently, Kitware&#39;s WAMI tracker was successfully transitioned to theatre as part of the Air Force&#39;s intelligence, surveillance, and reconnaissance system, where it produced tracks on downlinked WAMI in real time.</p>
<p>Forge.mil is the Department of Defense&rsquo;s (DoD) collaborative, government-open-source software hosting and development site supporting the technology development community. Membership in Forge.mil is available to DoD military and civilian employees and to DoD contractor personnel. By making Kitware&rsquo;s tracking system available on Forge.mil, Kitware is providing the DoD software development community the opportunity to review and use the operationally deployed software under a Government Purpose Rights license.</p>
<p>By placing its tracking technology on Forge.mil, Kitware is following its company-wide commitment to open-source software and scientific collaboration. Kitware intends to work with government and government contractor personnel to enhance the system and to help deploy its capabilities as part of other government projects. In addition to tracking technology, Kitware plans to add analytics to KWIVER from previous and ongoing government efforts. It is hoped that a software and algorithm development community will form around KWIVER, such that contributions from performers, government labs, and academia can be combined to create the best possible video analytics, freely available to everyone working with and within the US government.</p>
<p>This work is supported by DARPA and AFRL.</p>
<p>Approved for Public Release, Distribution Unlimited.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Lightning Talk Presented at GEOINT</strong></span></span></p>
<p>At the 10th annual Geospatial Intelligence (GEOINT) Symposium, Matt Turek, Kitware&#39;s Assistant Director of Computer Vision, presented new technology being developed at Kitware that addresses the challenges of analyzing crowdsourced multimedia. The lightning talk, &quot;Large-Scale Understanding of Crowdsourced Multimedia Relationships,&quot; detailed approaches that would enable analysts to more efficiently evaluate large-scale multimedia collections, such as YouTube videos, for salient information. Kitware&#39;s solutions center on automatic video grouping based on semantic concepts and interactive multimedia organization.</p>
<p>This technology represents some of the cutting-edge research being done by Kitware&#39;s Computer Vision team. In addition to providing solutions to the GEOINT community, Kitware has developed and deployed operational solutions to support other intelligence communities, including a WAMI tracker successfully transitioned to theatre.</p>
<p><a href="/source/files/4_1279805903.jpg" target="_blank"><img src="/source/files/Small.4_1279805903.jpg" style="width: 400px; margin-left: 125px; margin-right: 125px;" /></a></p>
<p style="text-align: center;"><em>Matt&#39;s talk focused on understanding Internet video relationships. In particular, he discussed complex events; people, places, and things; and indicators of population sentiment.</em></p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>New Capabilities Added to WAMI Utility</strong></span></span></p>
<p>Automatically detecting objects such as people and vehicles performing specific functions or complex activities is one of the most challenging problems in video analytics. Complex functional object recognition is required to detect objects that are defined by their behavior rather than their appearance, such as delivery trucks, police patrol vehicles, buses, and road cleaning vehicles. It is also used to detect complex threat patterns such as IED emplacement.</p>
<p>During Phase II of the project &quot;Vision with a Purpose: Inferring the Function of Objects in Video,&quot; Kitware created a prototype demonstration system to capture the power of human intuition by having users define complex functional objects&#39; components, as well as the components&#39; relationships, in a new graphical model. The model is then used to automatically and efficiently scan vast video scenes over long periods of time. In addition, the workflow enables users to share feedback to improve results. When example videos of functional objects are available, the system can also learn the core characteristics of their activities using machine learning techniques. The capability can be applied across diverse domains including wide-area motion imagery (WAMI), aerial full-motion video, and ground surveillance video.</p>
<p>The research and development conducted for the workflow has produced crucial state-of-the-art technologies for vision-based recognition of functional objects. The core capabilities of these technologies include learning functional object models from examples using machine learning techniques, detecting known functional objects using a complex activity model, detecting anomalous functional objects, modeling relationships between locations and movers, and analyzing multi-scale patterns of life.</p>
<p>The project&#39;s developments add new dimensions to the utility of WAMI videos and should play a crucial role in advancing the technologies utilized for national security and defense.</p>
<p>This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract Number W31P4Q-10-C-0262.</p>
<p>Approved for Public Release, Distribution Unlimited.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Point Cloud Processing Highlighted in Demonstration</strong></span></span></p>
<p>Casey Goodlett attended the Triangle Python Users Group Meeting held in Carrboro, NC, where he presented a talk on &quot;Python Scripting of ParaView with Application to Point Cloud Processing.&quot; The talk detailed use cases in point cloud processing to exemplify how Python can be used to implement domain-specific visualization and processing routines. More information on ParaView and point cloud processing can be accessed on ParaView&#39;s Wiki page.</p>
<p>As part of his talk, Casey presented a live demonstration that showed the use of Python and ParaView&#39;s Point Cloud Library plugin for the processing of Kinect data. In the video, ParaView is used to interactively build a pipeline of Point Cloud Library filters to segment objects sitting on a tabletop, as is shown in the image below.</p>
<p><a href="/source/files/4_928214936.jpg" target="_blank"><img src="/source/files/Small.4_928214936.jpg" style="width: 400px; margin-left: 125px; margin-right: 125px;" /></a></p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>NCIP Repositories Converted to Git</strong></span></span></p>
<p>Kitware announced the successful completion of its participation in the National Cancer Informatics Program (NCIP) Open-Development Initiative. Dr. Luis Ib&aacute;&ntilde;ez and Dr. Brad King led Kitware&#39;s efforts on this initiative, which included advising Leidos Biomedical Research, Inc., on converting its many public source-code repositories from Subversion (SVN) to Git and assisting in the transfer of the repositories to the GitHub hosting platform. Leidos Biomedical Research, Inc., formerly known as SAIC-Frederick, operates the Frederick National Laboratory for Cancer Research on behalf of the National Cancer Institute (NCI).</p>
<p>As part of this effort, 156 repositories were converted to Git and transferred to GitHub with 135 of these repositories located on the NCIP&#39;s main website. The remaining repositories can be found on remote sites, including the eXtensible Imaging Platform (XIP). The repositories contain tools that aid in biomedical research activities such as genome sequence analysis, enterprise-wide bio-banking, integration of translational research, radiologic imaging, and clinical trials management.</p>
<p>Dr. King led Kitware in converting projects related to the NCIP&#39;s caTissue platform to Git. In total, this platform consists of approximately 20 repositories.</p>
<p>The $77,629 used for the project was funded by NCI Contract No. HHSN261200800001E.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Three GSoC Projects Accepted for VTK</strong></span></span></p>
<p>Three of the accepted Google Summer of Code (GSoC) 2014 projects were proposed for the Visualization Toolkit (VTK).</p>
<p>&quot;Ensemble Vector Fields for VTK&quot; was proposed by Brad Eric Hollister, who is working with Kitware&#39;s Berk Geveci. To add visualization support for ensemble vector fields, Brad plans on performing several tasks for the project. The first is &quot;loading ensemble data from the NetCDF file format. The second is computation of finite-time variance analysis (FTVA) for an EVF. The third is to provide visualization of clusters in the variance data from FTVA. Lastly, as an optional item if time permits, is the inclusion of TRACLUS, a trajectory clustering algorithm for use within VTK.&quot;</p>
<p>&quot;Extensions for Geospatial and Climate based visualizations in VTK&quot; was proposed by Jatin Parekh. For the project, Jatin is working with Kitware&#39;s Aashish Chaudhary. The goal of the project involves &quot;adding a few extensions to the VTK library to support Geospatial and Climate based visualizations in the VTK library. The proposed work can be divided into four tasks: 1) Add new features to existing filters such as neat labelling of the contours. 2) New readers to read LIDAR dataset and GeoJSON geometry data into VTK. 3) Improve handling of time in VTK. 4) Implement tile rendering in VTK (optional - time permitting).&quot;</p>
<p>&quot;Supporting a Visualization Grammar&quot; was proposed by Marco Cecchetti. The goal of the project is to &quot;provide the ability to create plots and charts with the VTK framework by a simple declarative language. In order to achieve this goal a visualization grammar similar to the one utilized by the Vega JavaScript library will be mapped to VTK data structures and to specific classes that will be designed for supporting geometry objects and marks.&quot; Marco is working with Kitware&#39;s Jeff Baumes for the project.</p>
<p>For more information on these projects and details regarding VTK&#39;s participation in the program, please visit the Google Summer of Code 2014 website.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>New .Org Websites Unveiled</strong></span></span></p>
<p>Kitware announced the rollout of the new ParaView, Open Chemistry, and Tangelo websites. The website designs are intended to better serve the ParaView, Open Chemistry, and Tangelo communities by making it easier to provide more dynamic and up-to-date information, being mobile-friendly, and helping new users become familiar with and start utilizing the software.</p>
<p>Each website includes a full-page examples/image gallery and a quick access download button on the main page.</p>
<p>ParaView, Open Chemistry, and Tangelo are the first of the open-source project websites that Kitware manages to be transitioned to the new design. Kitware is continuing the process of transitioning additional websites and will announce their rollouts as they occur.</p>
<p>To provide feedback on the updated websites, please contact comm@kitware.com.</p>
<p><a href="/source/files/4_1524492786.jpg" target="_blank"><img src="/source/files/Small.4_1524492786.jpg" style="width: 400px; margin-left: 125px; margin-right: 125px;" /></a></p>
<p><strong>ParaView Spotlight: Show off your vis!</strong></p>
<p>We&rsquo;re looking for exciting new visualizations (images or movies) to include with the ParaView showcase we&rsquo;re hosting at SC14 and the new paraview.org. Show off how exciting your data and research is, and have your visualization featured! To make it in time for SC14, we&rsquo;ll need all submissions by October 1st. For details on how to submit, including specs, check out the &ldquo;what&rsquo;s new&rdquo; feature on paraview.org.</p>
<p>To make a contribution to the new paraview.org, please e-mail dave.demarle@kitware.com and comm@kitware.com.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Opensource.com Community Awards Received by Members of Kitware</strong></span></span></p>
<p>Kitware is pleased to congratulate Luis Ib&aacute;&ntilde;ez and Marcus D. Hanwell on their Opensource.com Community Awards.</p>
<p>Luis won a Reader&#39;s Choice Award for his article &quot;University course trades textbook for Raspberry Pi.&quot; The award is granted to an article written in 2013 that is determined by vote to be an Opensource.com community &quot;favorite.&quot; The article describes how the use of Raspberry Pi has taken the place of textbooks in the course Information in the 21st Century at the State University of New York at Albany. Luis also received a Social Sharer Award for his excellence in sharing Opensource.com articles online.</p>
<p>Marcus won a Conversation Starter Award for his achievement in beginning conversations on Opensource.com posts. Marcus became an Opensource.com Community Moderator earlier this year. As a member of the Opensource.com community, Marcus writes about a diversity of topics including open-source tools, events, and publications.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Kitware Actively Involved at SciPy 2014</strong></span></span></p>
<p>Kitware attended the 13th annual Scientific Computing and Python (SciPy) conference, which occurred from July 6 to July 12, 2014. The conference, hosted by members of academic, commercial, and government organizations, was dedicated to the &quot;advancement of scientific computing through open-source Python software for mathematics, science, and engineering.&quot;</p>
<p>Not only was Kitware a Silver sponsor of the event, but Matt McCormick and Aashish Chaudhary served as members of the SciPy Program Committee.</p>
<p>In addition, Kitware demonstrated its diverse scientific computing capabilities in areas such as geospatial dataset visualization, climate model analysis, and cross-platform builds across HPC, desktop, and mobile platforms by participating in several activities including presentations, developer sprints, and Birds-of-a-Feather (BOF) sessions.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Hackathon for Dermatology Image Repository Hosted at Kitware</strong></span></span></p>
<p>In collaboration with the Memorial Sloan Kettering Cancer Center (MSKCC) and the IBM Watson Research Center, Kitware hosted a Hackathon on Large-Scale Dermatology Image Repository earlier this year.</p>
<p>MSKCC - the world&#39;s oldest and largest private cancer center - has devoted more than 130 years to exceptional patient care, innovative research, and outstanding educational programs. In this collaboration, MSKCC is pursuing a large-scale data sharing initiative through which thousands of dermatological images will be publicly shared with the goal of furthering cancer research. This large collection of images will include annotations and metadata that will empower researchers to apply data analysis techniques to better understand the onset and evolution of skin cancer. IBM Research is playing a scientific role in researching algorithms to support the detection and diagnosis of melanoma.</p>
<p>Among the topics discussed at the hackathon were overarching plans for data sharing, software platforms for the image repository, software tools for image annotation, methodologies for detecting skin cancer by computing changes between images and baselines, integrating images into Electronic Medical Records (EMR), and licensing terms to ensure that data for the Dermatological Image Repository will be made available in the Public Domain.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Kitware Participates in Bike to Work Challenge</strong></span></span></p>
<p>Kitware once again brought home the Gearhead trophy for highest participation (for organizations with more than 10 employees) in the Saratoga Bike to Work Challenge, which occurred on May 16, 2014. In total, 14 members of Kitware participated in the challenge.</p>
<p>In addition, Chris Harris won the Fork in the Road trophy for the longest commute at 49 miles round trip.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Tutorials, Talks, and More Presented at CVPR 2014</strong></span></span></p>
<p>Kitware actively participated in the Computer Vision and Pattern Recognition (CVPR) 2014 Conference, which was held from June 23 to June 28, 2014. &nbsp;</p>
<p>Dr. Anthony Hoogs presented a talk titled &quot;Video Scene Segmentation and Recognition by Location-Independent Activity Classes&quot; at the Workshop on Perceptual Organization in Computer Vision, a workshop focused on segmentation.</p>
<p>Eran Swears presented a poster, which covered the paper &quot;Complex Activity Recognition using Granger Constrained DBN (GDBN) in Sports and Surveillance Video,&quot; during the main conference. He also presented his ICCV 2013 paper, &quot;Pyramid Coding for Functional Scene Element Recognition in Video Scenes,&quot; as a poster at the Scene Understanding Workshop.</p>
<p>In addition, Eran presented his dissertation at the Doctoral Consortium, a competitive program where Ph.D. students who are close to graduation are paired with senior researchers for career and technical mentoring.</p>
<p>Sangmin Oh, in collaboration with Computer Vision partners, provided a half-day tutorial on event and action recognition, and members of Kitware presented official demonstrations on &quot;Complex Activity Recognition Algorithms&quot; and &quot;Functional Scene Element Recognition Algorithms.&quot;</p>
<p>Furthermore, during the Vision Entrepreneurs Workshop (VIEW), Kitware demonstrated its Collaborative Computer Vision R&amp;D. This demonstration highlighted the Kitware Image and Video Exploitation and Retrieval Toolkit (KWIVER).</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Slicer Extensions Presented at NA-MIC Summer Project Week</strong></span></span></p>
<p>Kitware presented an overview of the Slicer extensions usage and contribution process during the 19th NA-MIC Summer Project Week. The presentation included an interactive demonstration of the creation of a Python-based Slicer extension using the new ExtensionWizard that Kitware developed, which greatly simplifies the Slicer extension creation and distribution process.</p>
<p>Slicer extensions provide easy access to industry-leading algorithms and interfaces that optimize Slicer for specific use cases. Currently, over 50 extensions have been contributed to Slicer. These extensions are available for Linux, Mac OS X, and Windows. Some of the available Slicer extensions are Airway Segmentation, Atlas-based Brain Segmentation, and DTI Fiber Viewer.</p>
<p>Future Slicer developments include methods for automatically receiving updates to extensions, the ability to automatically install extensions when new versions of Slicer are installed, and the ability to automatically create a Wiki page that describes each contributed extension.</p>
<p><a href="/source/files/4_638791185.jpg" target="_blank"><img src="/source/files/Small.4_638791185.jpg" style="width: 400px; margin-left: 125px; margin-right: 125px;" /></a></p>
<p>In addition, as part of the project week, the National Alliance for Medical Image Computing (NA-MIC) community hosted a tutorial contest. The winning tutorial was selected based on the criteria that it introduced a method that has been contributed to Slicer and that it effectively showed others how to apply the new method to medical image analysis.</p>
<p>This year&#39;s winner was the &quot;Cardiac Agatston Scoring Tutorial&quot; by Jessica Forbes and Hans Johnson. The winners will receive a $250 award, which is sponsored by Kitware.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Upcoming Conferences and Events</strong></span></span></p>
<p><strong>Advanced Design &amp; Manufacturing Impact Forum</strong><br />
August 17 to August 20, 2014<br />
Buffalo, NY</p>
<p>The forum will cover topics including design, additive manufacturing, aerospace, life sciences and medical devices, computer aided engineering, and robotics. Discussions will focus on emerging technologies, applications, and solutions for competing in the advanced design and manufacturing marketplace.</p>
<p>Will Schroeder will speak about &ldquo;Open Approaches to Technical Innovation&rdquo; during the Imaging for Diagnostics &amp; Intervention session on Tuesday, August 19, 2014.</p>
<p>For more information on the conference and presentation, please visit https://www.asme.org/events/advanced-design-manufacturing-impact-forum.</p>
<p><strong>ICPR 2014</strong><br />
August 24 to August 28, 2014<br />
Stockholm, Sweden</p>
<p>The 22nd International Conference on Pattern Recognition is hosted by the Swedish Society for Automated Image Analysis (SSBA). The conference&rsquo;s topics of discussion will include recent advances in pattern recognition, as well as in machine learning and computer vision.</p>
<p>&ldquo;Personalized Economy of Images in Social Forums: An Analysis on Supply, Consumption, and Saliency,&rdquo; which was co-authored by Sangmin Oh, Megha Pandey, Ilseo Kim, Anthony Hoogs, and Jeffrey Baumes, will be presented.</p>
<p>For more information on the conference and presentation, please visit http://www.icpr2014.org.</p>
<p><strong>2014 Strategies in Biophotonics</strong><br />
September 9 to September 11, 2014<br />
Boston, MA</p>
<p>The conference will explore issues that are critical to the successful commercialization of biophotonics-based systems. At the conference, examples of current clinical and scientific needs will be discussed, as well as how these needs can be attended to by biomedical optics and photonics.</p>
<p>Stephen Aylward will present the talk &ldquo;Open Science is Impacting Biomedicine!&rdquo; This talk will highlight the growth of open-source software and its impact on biomedicine.</p>
<p>For more information on the conference and presentation, please visit http://www.strategiesinbiophotonics.com/index.html#showcase_3.</p>
<p><strong>MICCAI 2014</strong><br />
September 14 to September 18, 2014<br />
Boston, MA</p>
<p>The 17th International Conference on Medical Image Computing and Computer Assisted Intervention will consist of workshops, tutorials, presentations, exhibitions, and challenges. Attendees of the conference typically include scientists, engineers, and clinicians from a variety of medical imaging and computer-assisted surgery fields.</p>
<p>The paper &quot;Low-Rank to the Rescue - Atlas-based Analyses in the Presence of Pathologies&quot; will be presented as part of the conference. It was co-authored by Xiaoxiao Liu, Marc Niethammer, Roland Kwitt, Matthew McCormick, and Stephen Aylward.</p>
<p>For more information on the conference and presentation, please visit http://www.miccai2014.org.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>New Employees and Interns</strong></span></span></p>
<p><strong>Tim Thirion</strong></p>
<p>Tim joined the Kitware team at the Carrboro, NC, office as an R&amp;D Engineer on the Scientific Computing team. He earned his B.S. in Computer Science from Purdue University, where he minored in physics. He later received an M.S. in Computer Science from the University of North Carolina at Chapel Hill. Prior to joining Kitware, Tim was a Software Engineer at 3D Systems, formerly known as Geomagic, Inc.</p>
<p><strong>Matthieu Heitz</strong></p>
<p>Matthieu joined the Kitware team in the Carrboro, NC, office as a one-year R&amp;D intern. He is an IT engineering student at the School of Chemistry, Physics, and Electronics of Lyon and is currently specializing in image processing and algorithms. Matthieu has prior internship experience at Techno Concept and WEG.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Employment Opportunities</strong></span></span></p>
<p>Kitware is seeking talented, motivated, and creative individuals to fill open positions. As one of the fastest growing companies in the country, Kitware has an immediate need for software developers and researchers. In particular, we are looking for scientific visualization developers who have C++ and JavaScript skills, as well as web development skills.</p>
<p>At Kitware, you will work on cutting-edge research alongside experts in the field. Our open source business model means that your impact goes far beyond Kitware, as you become part of worldwide communities that surround our projects.</p>
<p>Kitware employees enjoy a collaborative work environment that empowers them to pursue new opportunities and challenge the status quo with new ideas. In addition to providing an excellent workplace, Kitware offers comprehensive benefits including: flexible hours; a computer hardware budget; health, vision, dental, and life insurance; short- and long-term disability; visa processing; a generous compensation plan; a yearly bonus; and free drinks and snacks.</p>
<p>For more details, please visit our employment site at jobs.kitware.com. Interested applicants are encouraged to submit their resumes and cover letters through our online portal.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Kitware Internships</strong></span></span></p>
<p>Kitware internships provide current college students with the opportunity to gain hands-on experience working with leaders in their fields on cutting-edge problems. Our interns assist in developing foundational research and leading-edge technology across five business areas: scientific visualization, computer vision, medical computing, data and analytics, and quality software process. We offer our interns a challenging work environment and the opportunity to attend advanced software training.</p>
<p>To apply for an internship, please visit our employment site at jobs.kitware.com. Resumes and cover letters can be submitted through our online portal.</p>
Thu, 24 Jul 2014 00:00:00 -0400

Kitware News
http://www.kitware.com/source/home/post/142
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Quantifying Lung Cancer Markers</strong></span></span></p>
<p>Kitware has announced a $202,762 STTR award from the National Institutes of Health to develop a computational method for longitudinal image analysis.</p>
<p>The project is a collaboration between Kitware and three world-class research institutions: Rochester Institute of Technology (RIT), The University of North Carolina at Chapel Hill (UNC), and the University of Pittsburgh. It is co-led by Dr. Nathan Cahill of RIT and Dr. Marc Niethammer of UNC Chapel Hill&#39;s Department of Computer Science.</p>
<p>Over this two-year project, Kitware&#39;s research will focus on the continued development of the geometric metamorphosis algorithm that it pioneered with Dr. Niethammer at UNC. This algorithm facilitates longitudinal image analysis by capturing and quantifying pathology-specific changes regarding disease growth and infiltration, while also compensating for background motion in patient scans taken over time. Most current techniques do not distinguish diseases that spread by displacing healthy tissue from diseases that spread by infiltrating healthy tissue; however, distinguishing those types of changes can be vital to disease diagnosis and treatment monitoring. Furthermore, most current techniques do not de-couple background motion (e.g., respiratory motion) from disease change. Therefore, they provide an imprecise estimate of disease change.</p>
<p>While geometric metamorphosis is applicable to the longitudinal study of nearly any type of focal pathology, this project will focus on one of the most challenging and important clinical tasks: the detection and diagnosis of subtle lung lesions that may be precursors to the development of lung cancer. To pursue this clinical goal, the geometric metamorphosis algorithm will be integrated with more accurate models of lung motion developed by Drs. Nathan Cahill and Maria Helguera at RIT. Additionally, Drs. Kyongtae Ty Bae and David Fetzer, faculty in the Department of Radiology at the University of Pittsburgh, will provide clinical guidance and data throughout the development and evaluation of these techniques.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Kitware Expands Santa Fe Office</strong></span></span></p>
<p>Kitware&#39;s Santa Fe, New Mexico, office has moved to 1800 Old Pecos Trail, Suite G, Santa Fe, NM 87505. The new office offers a larger work environment with 1600 square feet, three offices, a conference room, a collaboration room, a kitchen, and a real data closet to facilitate a high-speed connection to Kitware&#39;s headquarters in New York.</p>
<p>The move follows the recent addition of two new employees to Kitware&#39;s Santa Fe office and an increase in the number of meetings held with collaborators in the area. The new office location will allow Kitware to grow by an additional five employees in Santa Fe and to host larger collaborative meetings. Having a larger presence in the area will improve Kitware&#39;s ability to offer local training courses on its most popular open-source packages such as the recent CMake course offered in March and the upcoming VTK / ParaView course that will be offered in May.</p>
<p>For more information and to register for the upcoming course, visit http://training.kitware.fr/browse/56.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Neurosurgery Simulation Tool for AVMs</strong></span></span></p>
<p>Kitware has announced $1,932,231 in funding from the National Institutes of Health (NIH) to develop and validate its neurosurgery simulation tool for the treatment of arteriovenous malformations (AVMs). This project is a collaborative effort between Kitware, Rensselaer Polytechnic Institute (RPI), the Department of Computer Science and the Department of Neurosurgery at the University of North Carolina (UNC), Arizona State University (ASU), and Professor Nikos Chrisochoides.</p>
<p>Cerebral AVMs affect millions of people around the world. The surgical resection of AVMs is one of the most complex surgeries involving brain vasculature. Due to the risk and complexity of AVM surgery, neurosurgeons need to be highly trained. The use of a realistic and approach-specific simulator will significantly improve the training process by allowing surgeons to have hands-on experiences without jeopardizing the health of patients.</p>
<p>The project&#39;s team has extensive expertise in clinical neurosurgical procedures, computational mechanics, computer graphics, meshing algorithms, human factor studies, and real-time simulation. Dr. Suvranu De from RPI, Dr. Dinesh Manocha from UNC, and Dr. Andinet Enquobahrie from Kitware are co-Principal Investigators for the project.</p>
<p>For the project, the team of collaborators aims to build a clinically-realistic and well-validated neurosurgical simulator that can effectively model vascular structures and non-linear deformations that occur during the surgical treatment of AVMs. The project&#39;s technical development includes anatomical modeling and volumetric meshing of vascular structures. It also involves combining FEM biomechanical modeling with fluid simulation, as well as integrating GPU-based implementations for real-time simulation.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>VTK Accepted in Google Summer of Code 2014</strong></span></span></p>
<p>Kitware is pleased to announce that the Visualization Toolkit (VTK) has been accepted to participate in Google Summer of Code (GSoC) 2014. This is VTK&rsquo;s second acceptance to the program, which fosters student participation in open-source communities.</p>
<p>Out of 371 applications, 190 open-source projects were selected to be a part of GSoC 2014. The program not only gives students the opportunity to work on real-world software projects with mentors in the field, but it introduces the mentoring organizations to talented new developers.</p>
<p>Kitware looks forward to another productive and rewarding experience this summer after the success of GSoC 2011. For VTK&rsquo;s participation in GSoC 2011, Tharindu De Silva&rsquo;s proposal &ldquo;Implement Select Algorithms from IEEE VisWeek 2010 in VTK&rdquo; and David Lonie&rsquo;s proposal &ldquo;Chemistry Visualization&rdquo; were selected.&nbsp; De Silva focused on implementing a selection of the most popular algorithms from IEEE VisWeek 2010 in VTK. Lonie improved support for rendering standard molecule representations. The following year, Lonie was hired as an R&amp;D engineer on Kitware&rsquo;s Scientific Computing team.</p>
<p>More information regarding VTK&rsquo;s participation in the program can be found on http://www.google-melange.com/gsoc/org2/google/gsoc2014/vtk, and example projects are located on the VTK GSoC 2014 Wiki page: http://www.vtk.org/Wiki/VTK/GSoC_2014.</p>
<p><a href="/source/files/4_874281451.jpg" target="_blank"><img src="/source/files/Small.4_874281451.jpg" style="width: 500px; height: 500px; margin-left: 100px; margin-right: 100px;" /></a></p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Kitware Receives Funding to Develop an Open-Source Application using S/TEM</strong></span></span></p>
<p>Kitware has announced a new Department of Energy SBIR Phase I award to develop an open-source platform for materials reconstruction using scanning transmission electron microscopes (S/TEM).</p>
<p>Scanning transmission electron microscopes have advanced the state-of-the-art in the field, facilitating the 3D characterization of materials at the nano and mesoscale. The importance of this type of 3D characterization has extended to a wide class of nanomaterials including hydrogen fuel cells, solar cells, industrial catalysts, new battery materials, and semiconductor devices. While there currently exists a large quantity of capable instrumentation, the rapidly expanding demand for high-resolution tomography is bottlenecked by software that is tailored to lower-dose, biological applications rather than higher-resolution, materials applications.</p>
<p>To address this bottleneck, Kitware will collaborate with Cornell University on the SBIR project &quot;Open-Source Visualization and Analysis Platform for 3D Reconstructions of Materials by Transmission Electron Microscopy.&quot; During Phase I of the project, the team will develop and test a fully functional, freely-distributable, open-source S/TEM package. This package will incorporate a modern user interface that enables the alignment and reconstruction of raw tomography data. The S/TEM package will also provide advanced 3D visualization and analysis that is specifically optimized for materials applications.</p>
<p>Dr. Marcus D. Hanwell, a Technical Leader on Kitware&#39;s Scientific Computing team, will serve as the Principal Investigator for the project.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Kitware Hosts CMake Tutorial</strong></span></span></p>
<p>Kitware hosted an on-site training course titled &quot;Project Lifecycle Management with the CMake Family of Tools&quot; in Santa Fe, NM, on March 4, 2014. Through a set of tutorials and exercises, the course provided developers with an in-depth examination of how CMake works and how it can be used to efficiently write scripts for small to larger projects.</p>
<p>Objectives of the course included understanding the basics of CMake, learning how to configure simple and complex projects, becoming familiar with CMake&#39;s new advanced features, and integrating CMake with CPack, CTest, and CDash.</p>
<p><a href="/source/files/4_1497982902.jpg" target="_blank"><img src="/source/files/Small.4_1497982902.jpg" style="width: 500px; height: 373px; margin-left: 100px; margin-right: 100px;" /></a></p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>New Computer Vision Webinars</strong></span></span></p>
<p>Kitware hosted two live webinars in January. The first webinar, &quot;A Minimum Error Vanishing Point Algorithm and Its Applications,&quot; was based on the paper &quot;A Minimum Error Vanishing Point Detection Approach for Uncalibrated Monocular Images of Man-made Environments&quot; by Yiliang Xu, Sangmin Oh, and Anthony Hoogs. The webinar outlined the vanishing point algorithm, which was published in CVPR 2013, as well as how it relates to camera calibration, surveillance, robot navigation, and scene understanding.&nbsp; &nbsp;</p>
<p>The second webinar, &quot;Building Large-scale Multimedia Search Engines,&quot; was based on the paper &quot;Multimedia Event Detection with Multimodal Feature Fusion and Temporal Concept Localization&quot; by Sangmin Oh, Scott McCloskey, Ilseo Kim, Arash Vahdat, Kevin J. Cannons, Hossein Hajimirsadeghi, Greg Mori, A.G. Amitha Perera, Megha Pandey, and Jason J. Corso. It showcased work designed to help users find videos of queried events such as flash mobs.&nbsp; &nbsp;</p>
<p><a href="/source/files/4_1392017759.jpg" target="_blank"><img src="/source/files/Small.4_1392017759.jpg" style="width: 299px; height: 299px; margin-left: 200px; margin-right: 200px;" /></a></p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Locating Anatomical Structures in Ultrasound Images</strong></span></span></p>
<p>In a research effort funded by the National Institutes of Health, Kitware teamed with InnerOptic to develop a new approach to help novice users analyze ultrasound images in order to locate specific anatomical structures. The team&#39;s approach is based on developing a set of template video sequences that depict particular anatomical structures captured by an expert user. During an examination, the video sequence created by the novice operator is continually compared to these templates in order to identify the target anatomical structures.</p>
<p>One challenge faced by the research team during this effort was access to sample clinical ultrasound video sequences. Although ultrasound images are readily available, the full video sequences are rarely retained. Even if this data did exist, these video sequences would have been captured by an expert operator and, therefore, would not accurately represent a video sequence captured by an inexperienced user. Accordingly, the team created a new, annotated database of ultrasound videos, which were acquired on three different phantom datasets. This ultrasound video database was designed to mimic the challenges of clinical data without having to request that clinicians alter their existing protocol in order to collect this data from patients. This video database has been made publicly available in order to help researchers evaluate and compare similar recognition approaches.</p>
<p>The team&#39;s research was recently published in the October issue of Medical Image Analysis. In addition, the team has released the detection algorithm as part of the open-source software package TubeTK.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Pathology Visions 2013 Best Poster</strong></span></span></p>
<p>At the 2013 Pathology Visions conference in San Antonio, Texas, Kitware&#39;s digital pathology work was recognized in Sharon E. Fox&#39;s poster &quot;Remote Eye-Tracking for Quantitative Assessment of Whole Slide Image Viewing.&quot; Dr. Fox&#39;s poster, which was co-authored by Dr. Charles Law at Kitware Inc. and Dr. Beverly E. Faulkner-Jones at Beth Israel Deaconess Medical Center, won Best Poster by a Resident. Kitware&#39;s digital pathology work, led by Dr. Law, was leveraged for the client-server digital pathology system and supported fast web-based viewing over standard networks.</p>
<p>The poster details how advanced digital image software can be used to aid pathologists. For the research, Dr. Fox used a Tobii T-120 eye tracker and a client-server digital pathology system to analyze subconscious patterns of gaze and attention as pathologists viewed digital whole slide images (WSI). The patterns help researchers understand how pathologists interact with WSI software, including various interface options, as part of the diagnostic decision making process.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Kitware Attends Inria Industry Meeting</strong></span></span></p>
<p>Members of Kitware SAS presented Kitware&#39;s expertise in visualization, data processing, and modeling at the February 11, 2014, Inria Industry Meeting. Inria (the French Institute for Research in Computer Science and Automation) and Lyonbiop&ocirc;le held the meeting in partnership in Lyon, France. The meeting highlighted the relationships between data modeling, analysis, and management in the development of health products.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Kitware Gives Back to Ronald McDonald House Charities</strong></span></span></p>
<p>Kitware&#39;s Clifton Park, NY, and Carrboro, NC, offices held a fundraiser in February. The fundraiser continued Kitware&#39;s tradition of supporting the needs of its communities, as all of the proceeds were donated to Ronald McDonald House Charities (RMHC).</p>
<p>The fundraiser consisted of selling baked goods and HEARTS to support the RMHC program &quot;Help With All Your Heart.&quot; RMHC helps seriously ill children and their families through programs, grants, and scholarships, as well as Ronald McDonald Houses, Family Rooms, and Care Mobiles. In total, Kitware raised over $500 for the charity.</p>
<p><span style="color:#000080;"><span style="font-size: 14px;"><strong>Awards and Promotions</strong></span></span></p>
<p>Rusty Blue received an award for his 10 years of service at Kitware. Dr. Blue joined Kitware in January 2004 as an R&amp;D Engineer with expertise in visualization and haptics. Dr. Blue has continued his contributions to the field of haptics at Kitware. His work has also involved 3D structured light systems for reconstructing 3D environments. In 2011, Dr. Blue became a Technical Leader on the Computer Vision team.</p>
<p>Utkarsh Ayachit has been promoted to the position of Distinguished Engineer. Mr. Ayachit drives a number of projects including SBIRs, grants, and commercial projects, and he is the lead developer and caretaker of ParaView. Mr. Ayachit is also the original developer of ParaViewWeb.</p>
<p>Bob O&#39;Bara has been promoted to Assistant Director. Mr. O&#39;Bara is leading one of the fastest growing areas at Kitware: pre-processing, simulation preparation, and modeling. He is also well known for his customer interaction skills.</p>
<p>Aashish Chaudhary has been promoted to Technical Leader. Mr. Chaudhary has demonstrated great business development and project management skills. He leads Kitware&#39;s climate data analysis and visualization efforts.</p>
<p>From the Business Development Team, Casey Goodlett moves into a Technical Leadership position. Dr. Goodlett has made many valuable additions to Kitware over the last several years including contributing to and, in many cases, leading important commercial customer relationships.</p>
<p>Joachim Pouderoux, at Kitware SAS in Lyon, France, is now a Technical Expert. Dr. Pouderoux has shown significant expertise in scientific visualization, specifically in meshing and rendering. He has also driven important customer relationships.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>New Employees</strong></span></span></p>
<p>Jonathan Beezley joined the Kitware team in the Clifton Park, NY, office as a Scientific Computing R&amp;D Engineer. He earned his B.S. in Physics from the University of Nebraska at Lincoln in 2001. He later received his Ph.D. in Applied Mathematics from the University of Colorado at Denver. Prior to joining Kitware, Jonathan worked as a Post-Doc at CERFACS and M&eacute;t&eacute;o-France, studying covariance modeling of deformed stationary fields.<br />
<br />
Heather James joined the Kitware team in the Clifton Park, NY, office as the Business Development Manager. She received her B.S. in Marine Science from Texas A&amp;M University. Before becoming a part of the Kitware team, Heather served as the Manager of Technology and Innovative Applications at UTC Aerospace Systems. She has also worked for Shafer Corporation/Air Force Research Laboratory, as well as Science and Technology International/BAE Systems Spectral Solutions.</p>
<p><span style="font-size:14px;"><span style="color: rgb(0, 0, 128);"><strong>Employment Opportunities</strong></span></span></p>
<p>Kitware is seeking talented and motivated individuals to fill open positions. Interested applicants are encouraged to visit our employment site at jobs.kitware.com and submit a resume and cover letter through our online portal.</p>
Thu, 17 Apr 2014 00:00:00 -0400

Insight Toolkit Plug-Ins: VolView and V3D
http://www.kitware.com/source/home/post/19
<p>VolView and V3D are applications for visualization and analysis of three-dimensional images. Both have tools that allow users to filter and analyze image data. The two applications serve two different niches: VolView was created with radiologists in mind, while V3D caters primarily to microscopists. However, a powerful part of both biomedical imaging tools is support for user-defined extensions via custom plug-ins. This support allows users to extend how input data is filtered. This quick guide will help you get started with your own VolView and/or V3D plug-in.</p> <p>Software applications such as Slicer, SNAP, Analyze, VolView, SCIRun, GoFigure and V3D use ITK filters as plug-ins to add desirable additional capability to their image analysis application of choice, thus removing the need to rewrite existing algorithms for each new piece of software and eliminating the hassle of requesting usage permission.</p> <p>ITK&rsquo;s qualifications for use in scientific research make it important for developers to make the most of ITK&rsquo;s imaging tools and offer tailored combinations to those who desire them. The following table compares some of the main features of VolView [1] and V3D [2].</p> <p style="text-align: center;"><img style="border: 0pt none;" src="/source/files/3_1333118702_png" alt="" width="400" /></p> <p><strong>Structure of Plug-ins</strong><br />Typically, a plug-in for V3D and VolView consists of source code compiled and packaged as a shared library. The plug-in name determines the name of the shared library used for deployment as well as the name of the plug-in initialization function. The shared library is copied into a specific directory of the VolView or V3D binary installation. No libraries from VolView or V3D are required in the process. Plug-in developers only need a set of source code headers defining the plug-in API offered by the application. This is essentially the set of data structures and function calls by which the plug-in communicates with the application.</p> <p><strong>A V3D PLUG-IN</strong><br />The development of ITK plug-ins for V3D serves two purposes: 1) exposing ITK functionalities to researchers who analyze microscopy data and 2) uncovering the areas in which ITK requires improvements in order to better serve the microscopy community. ITK filter plug-ins were added to V3D via a collaborative effort between Kitware and Janelia Farm (HHMI).</p> <p>The simplest way to implement a plug-in is to copy and paste an existing V3D plug-in and modify two methods. More advanced plug-ins, typically those requiring more than one filter, may need to be modified further. For our V3D plug-in example, we will use the existing itkSigmoidImageFilter plug-in, which can be found under the IntensityTransformations directory, to create another plug-in, itkLogImageFilter. For V3D, the V3DPluginCallback is used to get data structures and callbacks.</p> <p><strong>Create a V3D Plug-in</strong></p> <p>Copy and paste existing plug-in header and source files to the binary directory where plug-ins are set up in your system. An example path is: src/ITK-V3D-Plugins/Source/IntensityTransformations. Change the file names to correspond to the goal image filter.</p> <p style="text-align: center;"><img style="border: 0pt none;" src="/source/files/3_1519728414_png" alt="" width="400" /></p> <p>Then find instances in files where filter references and names ought to be replaced.
In the SetupParameters section, adjust your filter&rsquo;s parameters; if the section does not exist, refer to the SetupParameters() example below. If you're uncertain about a default value, use your best judgment or communicate with an appropriate end-user for a general value. The value chosen should reflect some noticeable changes in an image upon testing.</p> <table style="background-color: #d2d2d0; width: 600px; height: 22px;" border="0"><tbody><tr><td>&nbsp; void Execute<br />&nbsp;&nbsp; (const QString &amp;menu_name, QWidget *parent)<br />&nbsp;&nbsp; {<br />&nbsp;&nbsp; this-&gt;Compute();<br />&nbsp;&nbsp; }<br />&nbsp; <br />&nbsp; virtual void ComputeOneRegion()<br />&nbsp;&nbsp;&nbsp; {<br /><br />&nbsp;&nbsp;&nbsp;&nbsp; this-&gt;m_Filter-&gt;SetInput<br />&nbsp;&nbsp;&nbsp;&nbsp; ( this-&gt;GetInput3DImage() );<br /><br />&nbsp;&nbsp;&nbsp;&nbsp; if( !this-&gt;ShouldGenerateNewWindow() )<br />&nbsp;&nbsp;&nbsp;&nbsp; {<br />&nbsp;&nbsp;&nbsp;&nbsp; }<br /><br />&nbsp;&nbsp;&nbsp;&nbsp; this-&gt;m_Filter-&gt;Update();<br />&nbsp;&nbsp;&nbsp; }<br /><br />&nbsp; virtual void SetupParameters()<br />&nbsp;&nbsp;&nbsp; {<br />&nbsp;&nbsp;&nbsp;&nbsp; // These values should actually be provided by<br />&nbsp;&nbsp;&nbsp;&nbsp; // the Qt Dialog...<br />&nbsp;&nbsp;&nbsp;&nbsp; // just search the respective .h file for the<br />&nbsp;&nbsp;&nbsp;&nbsp; // itkSetMacro for parameters<br />&nbsp;&nbsp;&nbsp;&nbsp; this-&gt;m_Filter-&gt;SetFullyConnected( true );<br />&nbsp;&nbsp;&nbsp;&nbsp; this-&gt;m_Filter-&gt;SetBackgroundValue( 0 );<br />&nbsp;&nbsp;&nbsp;&nbsp; this-&gt;m_Filter-&gt;SetForegroundValue( 100 );<br />&nbsp;&nbsp;&nbsp;&nbsp; this-&gt;m_Filter-&gt;SetNumberOfObjects( 3 );<br />&nbsp;&nbsp;&nbsp;&nbsp; this-&gt;m_Filter-&gt;SetReverseOrdering( false );<br />&nbsp;&nbsp;&nbsp;&nbsp; this-&gt;m_Filter-&gt;SetAttribute( 0 );<br />&nbsp;&nbsp;&nbsp; }<br /></td></tr></tbody></table> <p><strong>A VOLVIEW PLUG-IN</strong><br />For this example, a plug-in named vvITKGradientMagnitude will be deployed in a shared library: libvvITKGradientMagnitude.so on Unix/OS X and vvITKGradientMagnitude.dll on MS Windows. The initialization function is vvITKGradientMagnitudeInit(). The result of the example will be an implementation of a simple ITK-based filter with only one GUI parameter. The example may be adapted to most other toolkits or C/C++ implementations.</p> <p>Given the similar structure of V3D, the directions in this VolView example should be generic enough to be applied to a V3D plug-in with the respective style/naming differences.</p> <p>Communication between the plug-in and the application is facilitated by a public header file that defines the data and GUI structures. The plug-in developer simply implements the methods that are defined within the header file.</p> <p><strong>Initialization function</strong><br />A plug-in&rsquo;s initialization function must conform to a particular API. For our particular example, this would be:
This initialization function will be invoked by VolView at start-up, after the shared library has been dynamically loaded.<br /><br /><strong>Content</strong><br />Below is the typical content of the plug-in initialization function; the macros and structures it uses are defined in the public header file, vtkVVPluginAPI.h.</p> <p>Call the macro vvPluginVersionCheck() to verify that the plug-in API conforms to the current version of VolView's binary distribution. A plug-in cannot be executed if the versions do not match, and VolView displays an error message at run-time to indicate this when necessary.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>vvPluginVersionCheck();</td></tr></tbody></table> <p>The information structure is then initialized with setup information that does not change at run time.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>// Setup Information</td></tr></tbody></table> <p>ProcessData is set to a pointer to the function that will perform the computation on the input data. This allows for freedom in the implementation of the function. This is further covered in the next section.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>info-&gt;ProcessData = ProcessData;</td></tr></tbody></table> <p>Similarly, UpdateGUI is also set to a function pointer.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>info-&gt;UpdateGUI = UpdateGUI;</td></tr></tbody></table> <p>SetProperty() is used to define general properties of the plug-in &ndash; some of these properties are simply informative text that is displayed on the GUI (e.g., the textual name of the plug-in and its terse and extended documentation). Properties are identified by tags to further enforce the decoupling between the internal representation of information in VolView and the structure of code in the plug-in.
<p>SetProperty() is used to define general properties of the plug-in. Some of these properties are simply informative text displayed in the GUI (e.g., the textual name of the plug-in and its terse and extended documentation). Properties are identified by tags to further enforce the decoupling between the internal representation of information in VolView and the structure of code in the plug-in. Other, non-GUI properties are also set with this method.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>// Setup Information - SetProperty()</td></tr></tbody></table> <p>The tag VVP_NAME specifies that the string passed as the third argument of the SetProperty() method should be used as the text label of the plug-in in the GUI. VVP_GROUP specifies the grouping of the filter within the plug-in menu, and VVP_TERSE_DOCUMENTATION provides a short description of the plug-in.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td><p>info-&gt;SetProperty( info, VVP_NAME, "Gradient<br />Magnitude IIR (ITK)");</p> <p>info-&gt;SetProperty( info, VVP_GROUP, "Utility");</p> <p>info-&gt;SetProperty( info, VVP_TERSE_DOCUMENTATION,<br />&nbsp; "Gradient Magnitude Gaussian IIR");</p></td></tr></tbody></table> <p>The tag VVP_FULL_DOCUMENTATION specifies the complete description string.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td><p>info-&gt;SetProperty( info, VVP_FULL_DOCUMENTATION,<br />"This filter applies IIR filters to compute the equivalent of convolving the input image with the derivatives of a Gaussian kernel and then computing the magnitude of the resulting gradient.");</p></td></tr></tbody></table> <p>Other tags are used to specify:</p> <p>Whether this filter can perform in-place processing;</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>info-&gt;SetProperty( info, <br />&nbsp; VVP_SUPPORTS_IN_PLACE_PROCESSING, "0");</td></tr></tbody></table> <p>Whether this filter supports data streaming (processing in chunks);</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>info-&gt;SetProperty( info, <br />&nbsp; VVP_SUPPORTS_PROCESSING_PIECES, "0");</td></tr></tbody></table> <p>And other information about the filter implementation, such as the number of GUI items and the required Z overlap between pieces.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td><p>info-&gt;SetProperty( info, VVP_NUMBER_OF_GUI_ITEMS,<br />&nbsp; "1");</p> <p>info-&gt;SetProperty( info, VVP_REQUIRED_Z_OVERLAP,<br />&nbsp; "0");</p></td></tr></tbody></table> <p>Memory consumption is an important consideration for processing. The plug-in therefore provides an estimate of the number of bytes of memory required per voxel of the input data set:</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td><p>info-&gt;SetProperty( info, <br />&nbsp; VVP_PER_VOXEL_MEMORY_REQUIRED, "8");</p></td></tr></tbody></table> <p>VolView uses this factor to ensure that the system has enough memory to complete the plug-in processing and to determine whether the undo information can be kept. Note that this estimate is not based on the size of the final data set produced as output, but on the total amount of memory required for intermediate processing. In other words, it should reflect the peak memory consumption during plug-in execution.</p>
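<p>As a rough illustration of how such a per-voxel factor translates into an actual figure (the exact accounting VolView performs internally is not spelled out here, so treat this as an assumption), the estimate is simply the factor multiplied by the number of input voxels:</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>// Illustrative only: peak-memory estimate implied by<br />// VVP_PER_VOXEL_MEMORY_REQUIRED.  A 256 x 256 x 128 input with a<br />// factor of 8 bytes per voxel yields roughly 64 MB.<br />static double EstimatePeakMemoryMB( const int dim[3],<br />&nbsp; double bytesPerVoxel )<br />{<br />&nbsp; const double voxels =<br />&nbsp;&nbsp;&nbsp; (double)dim[0] * (double)dim[1] * (double)dim[2];<br />&nbsp; return voxels * bytesPerVoxel / ( 1024.0 * 1024.0 );<br />}</td></tr></tbody></table>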
<p><strong>The ProcessData() Function<br /></strong>The ProcessData() function performs the filter computation on the data. The function signature of ProcessData() is:</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>static int ProcessData( void *inf, <br />&nbsp; vtkVVProcessDataStruct *pds)</td></tr></tbody></table> <p>where the first argument is a void pointer that can be cast to a vtkVVPluginInfo pointer using:</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>vtkVVPluginInfo *info = (vtkVVPluginInfo *) inf;</td></tr></tbody></table> <p>The second argument is a pointer to a vtkVVProcessDataStruct structure that carries information on the data set to be processed. This information includes the actual buffer of voxel data, the number of voxels along each dimension in space, the voxel spacing, and the voxel type.</p> <p>The vtkVVProcessDataStruct also contains the members inData and outData, which are pointers to the input and output data sets, respectively. ProcessData() extracts the data from the inData pointer, processes it, and stores the final results in the outData buffer.</p> <p><strong>ProcessData() Starting Code <br /></strong>The typical starting code of this function extracts meta-information about the data set from the vtkVVProcessDataStruct and vtkVVPluginInfo structures. For example, the following code shows how to extract the dimensions and spacing of the data.</p> <p>First, set up a data structure.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td><p>SizeType size;<br />IndexType start;<br />double origin[3];<br />double spacing[3];</p> <p>size[0] = info-&gt;InputVolumeDimensions[0];<br />size[1] = info-&gt;InputVolumeDimensions[1];<br />size[2] = pds-&gt;NumberOfSlicesToProcess;</p> <p>for( unsigned int i=0; i&lt;3; i++ )<br />&nbsp; {<br />&nbsp; origin[i] = info-&gt;InputVolumeOrigin[i];<br />&nbsp; spacing[i] = info-&gt;InputVolumeSpacing[i];<br />&nbsp; start[i] = 0;<br />&nbsp; }</p></td></tr></tbody></table> <p>Image data can be imported into an ITK image using the itk::ImportImageFilter.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td><p>RegionType region;<br />region.SetIndex( start );<br />region.SetSize( size );<br />m_ImportFilter-&gt;SetSpacing( spacing );<br />m_ImportFilter-&gt;SetOrigin( origin );<br />m_ImportFilter-&gt;SetRegion( region );<br />m_ImportFilter-&gt;SetImportPointer( <br />&nbsp; static_cast&lt; PixelType * &gt;( pds-&gt;inData ), <br />&nbsp; totalNumberOfPixels, false );</p></td></tr></tbody></table> <p>The output of the import filter is then connected as the input of the ITK data pipeline, and the pipeline execution is triggered by calling Update() on the last filter.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>m_FilterA-&gt;SetInput( m_ImportFilter-&gt;GetOutput() );<br />m_FilterB-&gt;SetInput( m_FilterA-&gt;GetOutput() );<br />m_FilterC-&gt;SetInput( m_FilterB-&gt;GetOutput() );<br />m_FilterD-&gt;SetInput( m_FilterC-&gt;GetOutput() );<br />m_FilterD-&gt;Update();</td></tr></tbody></table>
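<p>The starting code above relies on a handful of ITK typedefs and on the import filter having been instantiated beforehand. The declarations below are a minimal sketch of what these might look like; the concrete pixel type and member names are illustrative assumptions, not part of the VolView API.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>// Assumed declarations behind the starting code (illustrative)<br />typedef float PixelType;<br />typedef itk::Image&lt; PixelType, 3 &gt; ImageType;<br />typedef ImageType::SizeType SizeType;<br />typedef ImageType::IndexType IndexType;<br />typedef ImageType::RegionType RegionType;<br />typedef itk::ImportImageFilter&lt; PixelType, 3 &gt;<br />&nbsp; ImportFilterType;<br /><br />ImportFilterType::Pointer m_ImportFilter =<br />&nbsp; ImportFilterType::New();<br /><br />// Number of voxels handed to the import filter<br />const unsigned long totalNumberOfPixels =<br />&nbsp; size[0] * size[1] * size[2];</td></tr></tbody></table>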
<p>Finally, the output data can be copied into the pointer provided by VolView. This is typically done using an ITK image iterator that visits all the voxels.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>outputImage = m_FilterD-&gt;GetOutput();<br />typedef itk::ImageRegionConstIterator<br />&lt; OutputImageType &gt; OutputIteratorType;<br />OutputIteratorType ot( outputImage, <br />&nbsp; outputImage-&gt;GetBufferedRegion() );<br />OutputPixelType * outData = <br />&nbsp; static_cast&lt; OutputPixelType * &gt;( pds-&gt;outData );<br />ot.GoToBegin();<br />while( !ot.IsAtEnd() )<br />&nbsp; {<br />&nbsp; *outData = ot.Get();<br />&nbsp; ++ot;<br />&nbsp; ++outData;<br />&nbsp; }</td></tr></tbody></table> <p>When memory consumption is critical, it is more convenient to connect the output memory buffer provided by VolView directly to the output image of the last filter in the ITK pipeline. This can be done by invoking the following lines of code before executing the pipeline.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>m_FilterD-&gt;GetOutput()-&gt;SetRegions(region);<br />m_FilterD-&gt;GetOutput()-&gt;GetPixelContainer()<br />-&gt;SetImportPointer(<br />&nbsp; static_cast&lt; OutputPixelType * &gt;( pds-&gt;outData ),<br />&nbsp; totalNumberOfPixels, false);<br />m_FilterD-&gt;GetOutput()-&gt;Allocate();</td></tr></tbody></table> <p>To avoid rewriting this code for each new plug-in, a templated base class containing it is provided in the InsightApplications module of the current ITK distribution. New plug-ins only need to define their own ITK pipelines and invoke the methods of the base class in the appropriate order.</p> <p><strong>Refreshing the GUI <br /></strong>After the source code has been packaged into a shared library, it should be placed in the plug-ins directory: VolView 3.2/bin/Plugins. For the plug-in to load, the GUI needs to be refreshed by re-scanning all plug-ins; clicking the circular arrow next to the filter selection menu refreshes the filter list.</p> <p>Image processing algorithms can take considerable time to execute on 3D data sets. It is important to provide feedback on how the processing is progressing and to allow the user to cancel an operation if the total execution time is excessively long. Calling the UpdateProgress() function of the vtkVVPluginInfo structure from within the ProcessData() function accomplishes this:</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>float progress = 0.5; // 50% progress<br />info-&gt;UpdateProgress( info, progress, <br />&nbsp; "half data set processed");</td></tr></tbody></table> <p>This function provides feedback to the VolView GUI, allowing VolView to update the progress bar and set the status bar message. The frequency with which UpdateProgress() is called should be well balanced: if it is invoked too often, it will degrade the performance of the plug-in, since a considerable amount of time will be spent refreshing the GUI; if it is not called often enough, it may give the impression that the processing has failed and that the application is no longer responding to user commands.</p>
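<p>A balanced approach is, for example, to report progress once per slice (or once every few slices) rather than once per voxel. The loop below is an illustrative sketch of this pattern, not code taken from VolView.</p> <table style="background-color: #d2d2d0; width: 600px;" border="0"><tbody><tr><td>// Illustrative: report progress once per slice of the output<br />const int numberOfSlices = pds-&gt;NumberOfSlicesToProcess;<br />for( int slice = 0; slice &lt; numberOfSlices; ++slice )<br />&nbsp; {<br />&nbsp; // ... process or copy the voxels of this slice ...<br /><br />&nbsp; const float progress =<br />&nbsp;&nbsp;&nbsp; (float)( slice + 1 ) / (float)numberOfSlices;<br />&nbsp; info-&gt;UpdateProgress( info, progress, <br />&nbsp;&nbsp;&nbsp; "Processing slices...");<br />&nbsp; }</td></tr></tbody></table>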
<p>A detailed skeleton plug-in and a more contextual version of this guide can be found starting on page 45 of the VolView User&rsquo;s Manual, available at kitware.com/volview.</p> <p><strong>REFERENCES</strong><br />[1] Download VolView from: kitware.com/volview<br />[2] Download V3D from: <a href="http://penglab.janelia.org/proj/v3d">http://penglab.janelia.org/proj/v3d</a></p> <p><img style="float: left;" src="/source/files/3_405399245_png" alt="" width="75" /></p> <p><strong>Sophie Chen</strong> recently completed her second summer as a Kitware intern, where she worked under Luis Ib&aacute;&ntilde;ez, Wes Turner, and Harvey Cline on programming algorithms, ITK, and VTK. Sophie is a senior at RPI, where she is working toward an IT degree in Managing Information Systems.</p>Fri, 15 Oct 2010 00:00:00 -0400