Research and Development Center PJAIT in Bytom
Introduction
The Polish-Japanese Academy of Information Technology is opening up to the scientific community by offering interactive access, from anywhere in the country, to the Research and Development Center in Bytom (CBR PJAIT), Poland's first set of advanced laboratories for the analysis and synthesis of human motion in the shareconomy model.
In order to enlarge the set of units collaborating on the development of new solutions, expand the scope of ongoing research and increase its novelty, CBR PJAIT provides shared access to:
1) Unique specialized laboratories for human motion acquisition and analysis,
2) IT infrastructure for processing and storing large amounts of data,
3) Unique data collected in past research.
Drawing on global experience, CBR PJAIT offers collaboration and the sharing of experience, hardware and software resources, enabling innovative and interdisciplinary research in human motion acquisition, analysis and synthesis; dissemination of results in world-renowned journals; and development work leading to the introduction of innovative products to the market.
Our Laboratories
Multi-modal Human Motion Lab (HML)
HML enables the acquisition of motion data by simultaneously and synchronously measuring and recording motion kinematics, muscle potentials (EMG), ground reaction forces, and video streams.

Figure 1: Example visualization of multimodal data acquired in the HML laboratory
Apparatus
10 Vicon MX-T40 NIR cameras with the following specifications: 4 MP resolution (2352 × 1728 px), 10-bit grayscale. The system captures up to 370 frames per second at full resolution (4 MP); for higher sampling rates the resolution is reduced. An acquisition rate of 100 frames per second is used for standard measurements. The measurement space has the shape of an elliptical cylinder with a height of 3 m and base axes of 6.47 m and 4.2 m. Further components of the apparatus are: two Noraxon pressure plates, a MyoSystem 1400A 2×8-channel EMG system, 4 HD video cameras (DV Basler Pilot piA 1920-32gc), a treadmill with variable slope, and a stabilometer.
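For reference, the working volume implied by these dimensions can be computed directly; a minimal sketch, assuming the quoted 6.47 m and 4.2 m are the full axes of the elliptical base:

```python
import math

def elliptic_cylinder_volume(axis_a, axis_b, height):
    """Volume of an elliptical cylinder given full base axes and height."""
    return math.pi * (axis_a / 2) * (axis_b / 2) * height

# HML measurement space: base axes 6.47 m x 4.2 m, height 3 m
volume = elliptic_cylinder_volume(6.47, 4.2, 3.0)
print(f"Capture volume: {volume:.1f} m^3")  # ~64.0 m^3
```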
Software
Nexus, Blade, Polygon, Body Builder, BioWare, and MDE (Motion Data Editor), a proprietary system developed at PJAIT for viewing and editing motion data.
Areas of cooperation
- Multi-modal motion acquisition of the human figure in the Vicon system: kinematics, dynamics, EMG, GRF, four HD (High Definition) video streams, and LFP (Local Field Potential), planned once access to brain pacemakers (BMI) becomes available,
- Motion representation: conversions between motion capture formats, marker trajectory filtering, frame correction of selected bones, skeleton determination algorithms, skeleton quality assessment,
- Motion modeling and synthesis: forward and inverse dynamics, Featherstone algorithms, their implementation and extensions,
- Motion analysis using motion descriptors: motion segmentation, similarity criteria for different motion representations (DTW, Dynamic Time Warping, and its modifications), dimensionality reduction, manifold discovery for different types of motion (dimensionality, mapping), clustering and classification of motion data,
- Studying movement as a personal trait: determining personal descriptors,
- Planning and supervision of rehabilitation and its optimization,
- Design and optimization of prostheses and orthopedic implants,
- Diagnosis of orthopedic conditions, tracking and evaluating the effects of treatment,
- Diagnosis and rehabilitation in Parkinson's disease: correlation of the UPDRS scale with movement descriptors, evaluation of the effects of drug therapy and stimulation, and tuning of stimulation parameters based on movement descriptors.
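As an illustration of the similarity criteria mentioned above, here is a minimal dynamic time warping (DTW) sketch for comparing two one-dimensional motion signals; this is a textbook formulation in plain Python, not the laboratory's implementation:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping with absolute cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# Two gait-like signals differing in tempo still align closely
slow = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0]
fast = [0.0, 1.0, 0.0, -1.0, 0.0]
print(dtw_distance(slow, fast))
```

The same dynamic program extends to multidimensional marker trajectories by replacing the absolute difference with a Euclidean distance between frames.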
Human Facial Modelling Lab (HFML) and Human Microexpression Lab (HMX).
While the problem of retargeting human body movement is largely solved, perceptually reliable retargeting of facial emotions remains an open problem. The face is not a rigid solid, and facial expressions are one of the main channels of communication between people. The acquisition of facial expressions using markers is simple yet cumbersome for the actor. A specific class of facial expressions are the so-called microexpressions, which occur in cases of conscious lying.

Fig.2 From left, (i) two frames from the recording of facial microexpressions in the case of lying, and (ii) four face grids reconstructed from marker system data.

Fig. 3. Facial expression acquisition systems (marker and markerless) From left (i) Bonita 10 camera system (red arrows) and 6 hardware synchronized video cameras (blue arrows), (ii) example Bonita 10 camera and video camera.
Apparatus
10 Vicon Bonita 10 NIR cameras with the following specifications: 1 MP resolution, sampling rate up to 250 fps, 10-bit grayscale; 6 Point Grey Grasshopper 3 cameras with 1920×1200 resolution and 162 fps acquisition speed, with global shutter and precision lenses with a 25 mm focal length and f/1.4-16 aperture. Three USB 3.0 controllers, each with two 5 Gbps external ports. A programmable synchronizer with autorun, trigrun, syncrun, gaterun and softrun modes. A control computer with 2×5 TB disks.
Software
Nexus, Blade, Polygon, Meshlab. Two technologies have been implemented for transferring the actor's facial expressions, extracted from video, to a 3D neutral facial mesh. The first uses the Bonita marker system; the second uses images from 6 synchronized video cameras.
Areas of cooperation
- Entertainment and serious video game industry,
- Facial biometrics, lie detection based on microexpressions,
- Screening for genetic defects.
Human Seeing Lab (HSL) for Motion Analysis.
The HSL Laboratory is focused on developing, testing and creating new tools for IVA(Intelligent Video Analytics). The basis for the HSL Laboratory's operation is access to HD video streams from PTZ cameras, databases, and equipment that allows online processing of video data.

Figure 4: Example screens of the behavior recognition system. From left, (i) classification developed by the system, (ii) visualization of the trajectory of movement of points determined on the edge of the silhouette.
Apparatus
9 Axis Q6045-E Mk II PTZ CCTV cameras, including 7 in Gliwice and 2 at CBR Bytom. The cameras have the following parameters: full HD 1080p resolution, MJPEG streaming at 25 FPS, 32x optical zoom, shutter from 1/33,000 s to 1/3 s, ONVIF compliance, and access via the HTTP, HTTPS, SSL/TLS, FTP, CIFS/SMB, SNMP, SSH, RTCP and SFTP protocols. 14 dual-processor workstations for testing and implementation of algorithms and systems. A disk array and a video data backup server.
Software
Prototype-level implementations of (i) AR (Action Recognition) using high-level motion representations: action banks, poselets and their mixtures, velocity history of tracked points, motion segmentation, the Hough transform, articulated models, gradient histograms, and methods based on topological manifolds; (ii) TMO (Tracking Multiple Objects), tracking of people and objects based on tracking of key points and classification of their trajectories, deformable models, energy minimization, tracklet confidence, discriminative appearance and graph methods; (iii) PT (Pose Tracking), recognition and tracking of human poses, including 3D pose estimation.
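Of the techniques listed, motion segmentation is the simplest to sketch. The numpy-only illustration below uses thresholded frame differencing, a deliberately simpler stand-in for the prototype implementations described above:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Binary mask of pixels whose grayscale intensity changed by more
    than `threshold` between two consecutive frames."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Synthetic 8x8 frames: a bright 2x2 'object' moves one pixel to the right
f0 = np.zeros((8, 8), dtype=np.uint8)
f1 = np.zeros((8, 8), dtype=np.uint8)
f0[3:5, 2:4] = 200
f1[3:5, 3:5] = 200
mask = motion_mask(f0, f1)
print(mask.sum())  # number of pixels flagged as moving
```

Only the trailing and leading edges of the moving object change intensity, so the overlap column is not flagged; production IVA pipelines add background modeling and morphological cleanup on top of such masks.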
Areas of cooperation
- Identification of people's behavior and detection of dangerous situations,
- Predicting intentions from multi-camera video images using group behavior models,
- Creating evidence from motion analysis results based on social psychology, biometric techniques and 3D character inference from video,
- Analysis of the probability of a threat based on information about individuals and groups moving within the field of view of multiple cameras,
Wearable Technology Lab (WTL) for wearable systems.
The ability to acquire motion parameters and, more broadly, human vital signs is one of the rapidly growing areas of research and implementation.

Fig.5. Three versions of the motion acquisition costume with modules and cables integrated into the garment.
Apparatus
A proprietary costume based on IMU modules connected over a wired CAN bus. The costume comes in customizable strap-on and clothing-integrated versions. Proprietary prototypes of ZigBee-based wireless WBAN modules have also been developed.
Software

Fig.6. From left, (i) the configuration screen of the software supporting the outfit during the user's anthropometric data entry phase, (ii) the data processing pipeline configuration screen.
The IMU costume is supported by the MSAR software, which has an expandable modular structure designed for online motion data acquisition, visualization and analysis, as well as collection of motion data from mobile motion acquisition systems. The MSAR software consists of four modules:
- Library for communication with the MSAR system,
- MSAR system sensor streaming data handling module,
- Skeleton data visualization support module,
- A module that introduces MSAR system functionality into the application, enabling integration of MSAR data with other modules that process motion data.
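The modular structure above can be pictured as a small streaming pipeline in which processing stages subscribe to one sensor stream. The Python sketch below is hypothetical; class and method names are illustrative, not the actual MSAR API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ImuSample:
    """One reading from a single IMU module on the CAN bus."""
    module_id: int
    timestamp: float            # seconds
    accel: tuple                # (ax, ay, az) in m/s^2
    gyro: tuple                 # (gx, gy, gz) in rad/s

class SensorStream:
    """Fan-out point mirroring the streaming-data handling module:
    subscribed stages receive each sample in arrival order."""
    def __init__(self):
        self._subscribers: List[Callable[[ImuSample], None]] = []

    def subscribe(self, callback: Callable[[ImuSample], None]):
        self._subscribers.append(callback)

    def push(self, sample: ImuSample):
        for callback in self._subscribers:
            callback(sample)

# Usage: a recording stage and a visualization stage share one stream
stream = SensorStream()
recorded = []
stream.subscribe(recorded.append)
stream.subscribe(lambda s: None)  # stand-in for a live visualizer
stream.push(ImuSample(1, 0.01, (0.0, 0.0, 9.81), (0.0, 0.0, 0.0)))
```

The fan-out design lets visualization, analysis and storage modules be added or removed independently, which matches the expandable modular structure described above.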
Areas of cooperation
- All areas of sports where there is a need for motion acquisition and analysis,
- Long-term remote monitoring of the motor function of Parkinson's disease patients in their natural environment,
- Surveillance of the elderly, detection of falls and slips,
- Rehabilitation and telerehabilitation.
Human Dynamics and Multimodal Interaction Lab (HDMI)
CAREN Extended (Computer Assisted Rehabilitation ENvironment) is an immersive virtual reality system designed for research and rehabilitation. The system supports research work in the fields of neurology, rehabilitation, orthopedics, sports and entertainment, in both direct and remote access modes.

Figure 7: CAREN Extended laboratory as seen from the operator's station. From left, (i) view before launching the virtual environment, (ii) view with visualization of traffic along a city street.
Apparatus
The core of the HDMI laboratory's equipment is the CAREN Extended system, the first in Poland and one of 30 in operation worldwide. It is a qualitative extension of the HML laboratory, introducing multi-modal feedback implemented using virtual reality techniques. CAREN Extended features an interactive two-lane treadmill with a maximum speed of 5 m/s. Sensors measuring the force exerted by the foot on the ground (GRF system) are distributed along the entire length of each treadmill lane. The treadmill is placed on a platform with 6 degrees of freedom. The platform's kinematics are programmable, allowing its linear and angular velocities and accelerations to be set as functions of time. The CAREN Extended environment has an advanced vision system and an omnidirectional sound system. The image is displayed on a semi-spherical screen placed in front of the treadmill.
Wireless measurement of EMG muscle potentials is implemented in the system. The system can be reconfigured by adding an EEG system and a DBS system with LFP potential measurement. All quantities measured in the CAREN Extended environment are hardware synchronized. Quantities measured in the system, as well as those estimated by the software, are continuously displayed on virtual screens and can be saved to an editable graphic file. In addition, the person's behavior on the treadmill is recorded by 3 cameras.
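Programmable platform kinematics of the kind described amount to specifying a pose as a function of time. A minimal sketch follows; the sinusoidal sway profile and parameter names are illustrative assumptions, not a CAREN/D-Flow program:

```python
import math

def platform_pose(t, sway_amp=0.05, sway_freq=0.5, pitch_amp=2.0):
    """Return a 6-DOF pose (x, y, z, roll, pitch, yaw) at time t.
    Lateral sway in metres and pitch in degrees, both sinusoidal."""
    phase = 2 * math.pi * sway_freq * t
    x = sway_amp * math.sin(phase)
    pitch = pitch_amp * math.sin(phase)
    return (x, 0.0, 0.0, 0.0, pitch, 0.0)

# Sample the trajectory at 100 Hz for the first second
trajectory = [platform_pose(i / 100.0) for i in range(100)]
```

In an actual perturbation experiment, such a profile would be triggered for a short interval to displace the subject from equilibrium while the GRF and motion capture systems record the stabilizing response.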
Software
Nexus, Blade, Polygon, Body Builder, D-Flow.
Areas of cooperation
- Investigating strategies for stabilizing a posture perturbed from its equilibrium point by short-term changes in belt speed or changes in platform orientation,
- Study of gait and running motor skills under perturbation,
- Investigating the role of visual (screen image), sensory (6DOF platform) and acoustic information in balance, gait and running strategies,
- Investigating the role of visual information (image on screen) in grasping strategy (eye-hand coordination),
- Developing new rehabilitation strategies using biofeedback, including for Parkinson's disease (rhythmic cues),
- Design and optimization of prostheses and orthopedic implants,
- Diagnosis of orthopedic conditions, tracking and evaluating the effects of treatment.
Databases
A full HD 1080p, 18-27 FPS MJPEG video database with a multi-layered, expandable index describing motion for behavior pattern recognition. Due to its size, it is unique worldwide: currently shared video collections contain at most several GB of low-resolution video data with motion recordings and camera calibration data. The database contains 10,000 hours of footage, more than 1.3 billion frames, and 7.5 million recorded events of 150 types, 53,000 of which are manually annotated.
Database of Assets of Realistic Animation for computer games (mocap + rigged meshes)
Multimodal Database of motion, gait, exercise measurements of healthy subjects and patients: mocap (100 fps), 4 video streams (25 fps, 1920×1080), EMG, GRF.
Multimodal Database of selected motor tasks of Parkinson's disease patients: 4 video streams (25 fps, 1920×1080), motion capture (100 fps), EMG(EMG LB 3), GRF.
Database of segmented annotated hand ultrasound images with power doppler.
IT infrastructure
Computing: 23 nodes, 20 × Tesla K80, 2 × NVIDIA K2, 664 CPU cores, 7 TB RAM.
Storage: 1.2 PB with dynamic tiering and an architecture for sequential and random operation, 15 MB/s (read); media: SSD (3 TB), SAS (184 TB), NL-SAS (1092 TB).
Software: RHEV virtualization, SLURM, GLUT, MATLAB (server), Windows and Ubuntu OS, MSVS, GCC.

CBR-PJAIT as an ecosystem of state-of-the-art motion acquisition, analysis and synthesis laboratories in a shareconomy model
Working with a number of institutions and companies, CBR-PJAIT has noted the need for access to resources, advanced technical infrastructure and know-how. The infrastructure and resources owned by PJAIT will be made available to other entities in Poland and Europe, on a non-profit and commercial basis, using the largest network infrastructure in the country, PIONIER, and GÉANT. Through a federation model (a unified user authorization mechanism), PIONIER offers access to all scientific and medical entities in the country using this network, which simplifies the management of external users.

Fig. 8. General structure of CBR resources PJAIT and the concept of making them available.
The PJAIT CBR has also created a computing ecosystem within which the infrastructure can be shared: virtual environments, computing servers, data servers, laboratories and owned resources. The primary uses of the computing ecosystem include: data collection, computational modeling, image processing, resource sharing, integration with other computing centers, and information sharing.

Fig.9. Possibilities of using the IT infrastructure held at CBR PJAIT.
Security System
To meet the requirements of the Personal Data Protection Act and related regulations, CBR PJAIT has implemented a public key infrastructure whose elements include certification authorities and smart card personalization stations with printers. As a result, logging into the systems is done using smart cards with valid personal certificates.

Fig.10. Security system concept.
Teleconferencing and unified communications system
The video conferencing and unified communications system was developed for interactive sharing of laboratories, resources and data. The system includes hardware video terminals located in the laboratories and unified communication servers that increase the efficiency of remote communication and collaboration. The system enables voice and video calls, high-quality voice conferencing and multi-party calls, and requires only a web browser for users located outside the CBR PJAIT.
