Immerz transforms how people experience digital content by embedding high-fidelity haptics into audio. Its neuroscience-based approach lets users physically feel moments in games, music, and media, creating deeper emotional and sensory engagement.
HaptX develops advanced wearable technology that provides the most realistic touch simulation in virtual reality and robotics. Its HaptX Gloves G1 enhance workforce training by offering precise tactile feedback, enabling skill transfer in complex tasks like surgery, manufacturing, and equipment maintenance.
Immersion Corporation specializes in haptic technology, enhancing digital experiences through touch feedback. With a vast portfolio and a presence in over 3 billion devices globally, it delivers innovative touch solutions across industries, enriching consumer products like gaming consoles, smartphones, and automobiles.
Interhaptics is a software company specializing in haptic technologies, offering tools for game developers to design high-quality, cross-platform haptic feedback. Its platform supports PC, Console, Mobile, and XR devices, simplifying haptic adoption and enhancing user experience in gaming titles across multiple platforms.
FundamentalVR is a healthcare technology company specializing in immersive VR training solutions for surgical skills development. Its platform, featuring HapticVR® technology, accelerates competency in medical professionals by providing realistic, interactive simulations, improving surgical accuracy, and enabling remote collaboration and training across global teams.
PUI Audio provides high-quality audio, haptic, and sensor components. It offers customized solutions across various industries, including medical, industrial, and consumer electronics. Known for its innovation and superior customer service, it ensures top-tier performance through comprehensive testing and design support.
More in News
Monday, November 24, 2025
The geospatial industry is witnessing a shift as significant as the transition from theodolites to GPS. At the epicenter of this transformation is the convergence of Unmanned Aerial Vehicles (UAVs) and advanced photogrammetry. While aerial surveying has existed for a century, the field has moved beyond simple photography into an era of computational photogrammetry. In this new phase, high-resolution imagery is transformed into mathematically rigorous, centimeter-accurate 3D terrain models, democratizing high-precision data.

This evolution is not merely about capturing a bird's-eye view; it is about digitizing the physical world. Modern drone surveying workflows allow surveyors, engineers, and land managers to reconstruct reality with a level of fidelity that rivals traditional terrestrial methods, but with far higher speed and coverage. The process converts 2D pixels into 3D coordinates, transforming flat images into actionable spatial data.

Flight Geometry and Sensor Fidelity

High-fidelity 3D modeling depends fundamentally on the quality and precision of data acquisition, beginning with the sensor technology used during capture. Modern survey-grade drones employ mechanical global shutters that eliminate the geometric distortions associated with electronic rolling shutters, particularly during high-speed flight. This advancement ensures each frame preserves accurate spatial relationships. Equally important is the flight path: photogrammetry relies on parallax, which is achieved through structured-grid missions designed to maintain high forward (75–80 percent) and side (60–70 percent) overlap. Such redundancy enables software to triangulate depth by observing the same ground features from multiple perspectives. Ground Sampling Distance (GSD) has become the benchmark for evaluating resolution, with lower GSD values directly correlating with more detailed and reliable terrain outputs.
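The relationships above (sensor geometry, flight altitude, GSD, and overlap-driven photo spacing) can be sketched numerically. A minimal Python example, using hypothetical parameters loosely resembling a 1-inch-sensor survey drone rather than any specific model:

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """GSD in cm/pixel: the ground distance covered by one image pixel.
    Footprint width follows from similar triangles through the lens."""
    footprint_m = altitude_m * sensor_width_mm / focal_length_mm
    return footprint_m / image_width_px * 100.0  # metres -> centimetres

def photo_spacing(footprint_along_track_m, forward_overlap):
    """Distance flown between exposures to achieve a given forward overlap."""
    return footprint_along_track_m * (1.0 - forward_overlap)

# Hypothetical setup: 13.2 mm sensor, 8.8 mm lens, 100 m altitude, 5472 px wide.
gsd = ground_sampling_distance(13.2, 8.8, 100.0, 5472)   # roughly 2.7 cm/px
spacing = photo_spacing(100.0, 0.75)                     # 25 m between shots at 75% overlap
```

Note how halving the altitude halves the GSD, which is why flight height is the main lever on resolution.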
To complement nadir imagery, current workflows incorporate oblique captures—typically at 30–45 degrees—to enhance the reconstruction of vertical faces, built structures, and complex landscapes. While nadir images provide strong planar accuracy, oblique perspectives introduce critical side-wall visibility, allowing models to transition from simple surface projections to fully realized volumetric representations. This integrated approach ensures that modern 3D models deliver both geometric accuracy and comprehensive spatial completeness.

Algorithmic Alchemy: Structure from Motion (SfM) and Point Clouds

Once data acquisition is complete, the primary workload shifts from the drone to the processing workstation, where photogrammetric reconstruction begins. This process is powered by Structure from Motion (SfM), an algorithmic technique that simultaneously estimates both camera parameters and scene geometry—an improvement over traditional photogrammetry, which required predefined camera positions. The system performs feature extraction by scanning thousands of images to identify millions of key points, such as pavement edges, rocks, and distinct surface textures. These features are then matched across overlapping images, allowing the software to track specific points captured from different viewpoints. When a point is identified across multiple photos, its precise three-dimensional position can be determined by triangulation using collinearity principles. This process produces a sparse point cloud that serves as the initial geometric framework for the terrain. Subsequently, a bundle block adjustment refines this framework through rigorous mathematical optimization, minimizing discrepancies between observed and reconstructed point locations and ensuring a cohesive geometric solution. The culmination of these steps is the generation of a dense point cloud, which in modern workflows often comprises hundreds of millions of points.
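The triangulation step at the core of SfM can be illustrated with a minimal two-view example. The sketch below uses plain NumPy, identity camera intrinsics, and a hypothetical 1 m baseline (none of these reflect a particular software package); it solves the standard linear (DLT) system for one point observed in two images:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are normalized image points."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # null space of A holds the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]                  # dehomogenize

# Hypothetical cameras: one at the origin, one shifted by a 1 m baseline.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 10.0])      # a point 10 m in front of the cameras
x1 = X_true[:2] / X_true[2]              # projection into camera 1
p2 = P2 @ np.append(X_true, 1.0)
x2 = p2[:2] / p2[2]                      # projection into camera 2
X_est = triangulate(P1, P2, x1, x2)      # recovers (1, 2, 10)
```

In production SfM pipelines the same idea runs over millions of matched features, with the bundle adjustment then refining all points and camera poses jointly.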
Each point includes both spatial coordinates and RGB values, resulting in a highly detailed, photorealistic representation of the surveyed area—often exceeding the density of traditional ground-based measurements. A critical enhancement to this workflow is the integration of Real-Time Kinematic (RTK) and Post-Processing Kinematic (PPK) positioning. By recording the drone's position with centimeter-level accuracy at the moment each image is captured, the resulting point cloud is automatically aligned to the correct coordinate system. This significantly reduces reliance on physical Ground Control Points (GCPs), streamlines field operations, and maintains high global accuracy throughout the final dataset.

From Data to Intelligence: Orthomosaics and Digital Elevation Models

Photogrammetry derives its value from the deliverables produced from the point cloud, which have become standardized across the industry as orthomosaics and elevation models. An orthomosaic is not merely a stitched aerial panorama; it is a geometrically corrected image created through orthorectification using the underlying elevation model. This correction removes perspective distortion, eliminates scale variation caused by terrain relief, and produces a map-accurate image with consistent scale throughout. As a result, users can measure distances, areas, and angles directly on the orthomosaic with confidence. Advanced blending algorithms ensure seamless transitions between individual images, balancing color and exposure to create a continuous, uniform representation of the site.

The 3D information derived from photogrammetry is further processed into grid-based elevation models, primarily distinguished as Digital Surface Models (DSMs) and Digital Terrain Models (DTMs). A DSM reflects the captured surface, including vegetation, structures, and other objects, making it valuable for applications such as line-of-sight analysis and obstruction assessment.
In contrast, a DTM isolates bare earth by filtering out non-ground points using sophisticated classification algorithms, thereby generating an accurate representation of the underlying terrain. These models serve as the foundation for generating topographic contours, which modern software produces directly from the DTM, offering surveyors complete site coverage rather than relying on interpolated grid points. The dataset's volumetric nature enables precise stockpile volume calculations and detailed cut-and-fill analysis, supporting accurate earthwork planning by comparing existing conditions with design surfaces.

Today, photogrammetry in drone surveying is defined by integration and automation. It is a workflow in which the physical acquisition of images and the digital reconstruction of geometry are tightly intertwined. By leveraging high-resolution sensors, precise flight paths, and powerful SfM algorithms, the industry has established a terrain-modeling method that is both scalable and scientifically rigorous.
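The cut-and-fill analysis described above reduces to differencing two elevation grids cell by cell. A minimal sketch, assuming simple rectangular grids and a hypothetical 1 m cell size (real packages add surface classification and boundary handling on top of this):

```python
import numpy as np

def cut_and_fill(existing, design, cell_size_m):
    """Cut and fill volumes (m^3) between two aligned elevation grids.
    Positive difference = material above the design surface (cut)."""
    diff = existing - design
    cell_area = cell_size_m ** 2
    cut = diff[diff > 0].sum() * cell_area
    fill = -diff[diff < 0].sum() * cell_area
    return cut, fill

# Hypothetical 1 m grid: a 2 m high mound covering 3x3 cells on a flat design surface.
design = np.zeros((5, 5))
existing = design.copy()
existing[1:4, 1:4] = 2.0
cut, fill = cut_and_fill(existing, design, cell_size_m=1.0)  # 18 m^3 of cut, no fill
```

Stockpile volume is the same computation with the base surface interpolated under the pile standing in for the design grid.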
Friday, November 21, 2025
FREMONT, CA: A wearable bioelectronics lab at Northwestern University is developing innovative haptic patches, termed epidermal VR, to help people with neurological conditions, especially those with early-onset vision impairments. These patches use sensors to transmit information to haptic devices, much as VR goggles replicate visual experiences. The patches utilize actuators that operate at frequencies between 50 and 200 Hz, where the skin is most sensitive. These actuators can vibrate and apply pressure, requiring more force than typical vibration mechanisms. The small, battery-powered device achieves both functions using bistable magnetic materials and the skin's natural spring-like properties, making it more efficient than traditional, energy-hungry tethered devices. The bistable mechanism flips between states with a small burst of energy, similar to a light switch. The actuator uses a combination of vibration, pressing, and rotation to convey information to the skin, and researchers are exploring the optimal designs for these channels.

For instance, in a visual sensory-replacement system, indentation patterns created by the actuators can alert users to the presence of objects, warn of potential collisions, and indicate the distance to obstacles, helping them navigate their surroundings. By integrating LiDAR systems and related APIs that identify objects like chairs, walls, and doors, vibration can also guide users toward specific locations. The epidermal VR system maps the environment and detects obstructions using the LiDAR technology found in smartphones; this information is transmitted via Bluetooth to the haptic device for non-visual perception. Utilizing Apple's LiDAR APIs simplifies app development, with the phone handling image categorization and 3D reconstruction. Cloud processing may be incorporated to enhance the system's capabilities.
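As one illustration of how such distance cues might be encoded, the sketch below maps an obstacle distance to a vibration frequency within the 50–200 Hz band the article cites as the skin's most sensitive range. The linear mapping, range limits, and function names are hypothetical and do not describe the lab's actual encoding scheme:

```python
def distance_to_vibration_hz(distance_m, min_d=0.3, max_d=4.0,
                             f_near=200.0, f_far=50.0):
    """Map obstacle distance to a vibration frequency in the 50-200 Hz band.
    Nearer obstacles produce higher frequencies, i.e. a more urgent cue."""
    d = min(max(distance_m, min_d), max_d)   # clamp to the usable sensing range
    t = (d - min_d) / (max_d - min_d)        # 0.0 at nearest, 1.0 at farthest
    return f_near + t * (f_far - f_near)

urgent = distance_to_vibration_hz(0.3)   # 200.0 Hz: obstacle at arm's length
calm = distance_to_vibration_hz(4.0)     # 50.0 Hz: obstacle at range limit
```

A real system would also shape amplitude and pulse timing, but a monotonic distance-to-frequency map is the simplest version of the idea.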
A key innovation is the use of kirigami, a Japanese paper-cutting technique, to convert the actuator's linear motion into rotational motion at the skin. Positioning multiple actuators near each other allows the creation of intricate mechanical stimuli, like sub-pixels, enabling the delivery of more complex tactile information. The research team is also exploring neuromorphic computing and edge computing to further enhance the device's capabilities in the future. Currently, it uses a commercial System-on-a-Chip (SoC) with an ARM processor, Bluetooth stack, and communication antenna.

The lab makes the stimuli intuitive by linking them to natural sensory experiences. This lets users learn the system quickly, often within a couple of hours, by associating specific stimuli with visual locations. With practice, users can identify an object's location based solely on the sensation. The lab also aims to aid individuals who have lost sensation in their feet due to neurological conditions such as stroke or spinal cord injury. The haptic patches could assist gait and balance by enhancing sensory feedback: delivering precise tactile cues to the feet helps users regain awareness of their foot placement, making walking easier and safer.
Friday, November 21, 2025
FREMONT, CA: Quantum computing, an emerging technology frontier, promises to revolutionize defense technology. By leveraging the principles of quantum mechanics, this nascent field is poised to reshape military strategies, cybersecurity, and even logistics on a global scale, offering unprecedented opportunities alongside significant challenges.

One of the most critical areas is cryptography and cybersecurity. While quantum computers threaten to break traditional encryption methods, quantum cryptography—such as Quantum Key Distribution (QKD)—provides near-impenetrable security for military communications. In response to the looming threat of quantum attacks, governments and organizations are developing quantum-resistant algorithms to secure sensitive data. Leading nations, including China, have deployed QKD networks to safeguard military communication lines.

Another key application lies in logistics optimization and mission planning. Quantum computing can efficiently resolve complex logistical challenges, including supply chain management, resource allocation, and real-time decision-making. AI-quantum synergy has led to significant advancements in military strategy, enhancing the precision of mission planning. Additionally, quantum-assisted simulations allow defense forces to model battlefield scenarios with unparalleled accuracy, improving operational preparedness.

Quantum technology also introduces groundbreaking capabilities in surveillance and reconnaissance. Quantum sensors provide ultra-precise measurements, enhancing radar systems and submarine detection. A notable advancement is quantum radar, which has the potential to detect stealth aircraft, a capability being explored by major defense powers. Furthermore, satellite-based quantum sensors can detect subtle gravitational and magnetic field variations, offering enhanced intelligence-gathering capabilities.
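The basis-sifting idea at the heart of QKD protocols such as BB84 can be sketched in a few lines. This toy simulation assumes an ideal channel with no eavesdropper and omits error correction and privacy amplification; it simply keeps the bits where sender and receiver happened to choose the same measurement basis:

```python
import random

def bb84_sift(n, seed=0):
    """Toy BB84 sifting: Alice sends n random bits in random bases;
    Bob measures in random bases. Only matching-basis bits survive."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]
    # When bases match, Bob's measurement deterministically yields Alice's bit
    # (in this ideal model); mismatched bases are publicly discarded.
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]

key = bb84_sift(1000)   # roughly half the transmitted bits survive sifting
```

An eavesdropper measuring in the wrong basis would disturb the surviving bits, which is what the real protocol detects by comparing a sample of the sifted key.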
AI integration in defense is expected to reach new heights with quantum computing. Quantum-powered real-time threat analysis could enable military systems to anticipate and neutralize threats autonomously. Moreover, research is advancing into autonomous weapons that leverage quantum computing for improved decision-making, particularly in drone and unmanned weapons platforms. As quantum technologies evolve, they will play a pivotal role in shaping the future of military strategy and defense infrastructure.

While the integration of quantum technology into defense remains in its early stages, rapid advancements highlight its potential to redefine national security. Governments and private organizations are investing heavily in quantum initiatives to ensure technological superiority in the coming decades. Continued advancements in hardware, software, and cross-disciplinary collaboration will be critical to unlocking its full potential. Quantum computing holds the key to unprecedented advancements in defense technology, offering capabilities that can redefine national security, from secure communications to superior intelligence gathering and beyond. This potential, however, comes with its own set of challenges and responsibilities; by addressing them, the global defense community can harness quantum computing to build a more secure and advanced future.
Friday, November 21, 2025
Fremont, CA: Image sensors, most closely associated with digital cameras, have become crucial components of the modern world. These microscopic silicon chips, designed to convert light into electrical signals, are now omnipresent, powering gadgets ranging from mobile electronics to interplanetary instruments and critical medical apparatus. Their influence is so wide-ranging that the many applications in which they operate merit a closer look.

The Everyday Revolution

The most prominent and influential application of image sensors is in smartphones. What began as a modest feature has transformed into an advanced imaging system, incorporating multiple lenses, computational photography, and resolutions once exclusive to professional cameras. These sensors not only allow users to capture fleeting moments and produce high-quality videos but also enable functionalities such as facial recognition for secure and seamless access.

Beyond smartphones, image sensors are now integral to a wide range of consumer devices. In laptops and webcams, they facilitate video calls, online meetings, and the creation of digital content. Action cameras, such as GoPros, leverage them to record high-intensity adventures in remarkable detail, even under demanding conditions. Drones rely on image sensors for aerial photography and videography, revolutionizing how both hobbyists and professionals capture perspectives. Similarly, smart doorbells and security cameras enhance home security and provide remote monitoring, delivering convenience and peace of mind. Continuous advancements in this field are pushing the limits of sensor size, sensitivity, and processing power, bringing sophisticated imaging capabilities to billions worldwide.

Image sensors extend far beyond everyday applications, serving as critical "eyes" in aerospace and satellite systems where the demands are uniquely stringent.
In these environments, sensors must demonstrate exceptional radiation tolerance, unwavering reliability, and the ability to function in the vacuum of space and under extreme temperatures. Earth observation satellites depend on highly specialized sensors to monitor weather patterns, track climate change, map land use, detect deforestation, and support disaster response—providing data essential for scientific research, environmental stewardship, and economic planning. Space telescopes such as Hubble and James Webb rely on ultra-sensitive sensors to capture faint light from distant galaxies, nebulae, and exoplanets, unlocking insights into the origins and evolution of the universe. Similarly, planetary rovers and probes, like NASA's Mars missions, employ rugged sensors to deliver panoramic views, analyze geological formations, and search for signs of life. Across these applications, the challenge lies in engineering sensors that can endure extreme conditions while offering exceptional clarity, a broad spectral range, and the ability to operate well beyond the visible spectrum.

Medical Imaging

Digital X-ray detectors and Computed Tomography (CT) scanners employ advanced sensors to generate detailed images of bones, organs, and soft tissues, enabling the detection of fractures, tumors, and internal injuries. Ultrasound machines rely on transducers that emit and capture sound waves, translating echoes into real-time images of internal body structures—indispensable in prenatal care, cardiology, and the examination of soft tissues. Miniaturized sensors embedded in endoscopic and laparoscopic instruments allow physicians to visualize internal organs, such as the digestive tract and lungs, or to perform minimally invasive surgeries with enhanced precision. Likewise, high-resolution sensors integrated with microscopes are essential in research and diagnostics, providing detailed views of cells, bacteria, and other microscopic structures.
Across all these applications, medical imaging demands sensors with high sensitivity, low noise, and exceptional spatial resolution to capture the subtle details crucial for accurate diagnosis and effective treatment. The image sensor, initially a specialized component, has evolved into a foundational technology that consistently redefines human perception, comprehension, and interaction with the surroundings. Its progression from smartphones to satellites exemplifies human ingenuity and the limitless capabilities of light-sensing technology.
Friday, November 21, 2025
Fremont, CA: The modern business landscape is undergoing a rapid, technology-driven transformation. Artificial Intelligence (AI), cloud computing, and automation are no longer future concepts—they are the core engines of present-day operational efficiency and innovation. For organizations not merely to survive but to thrive in this new era, they must strategically invest in their most valuable asset: their people. Upskilling the workforce in applied tech is not just a cost—it is a competitive imperative.

The Urgency of the Tech Skills Gap

The rapid pace of technological advancement—accelerated further by generative AI—has widened the global tech skills gap. According to the World Economic Forum, more than 60% of employees will require reskilling by 2027 as automation reshapes roles across industries. Organizations that fail to address this gap face operational inefficiencies, slower innovation cycles, and rising employee anxiety driven by fears of job displacement. While many companies attempt to address this challenge by outsourcing scarce, costly tech talent, a more sustainable and strategically advantageous approach lies in developing internal capabilities. Investing in the existing workforce strengthens loyalty, leverages institutional knowledge, and ensures that newly acquired skills can be immediately applied to the organization's specific operational and strategic needs.

Key Technology Focus Areas

Effective upskilling must center on three interconnected pillars of modern applied technology. AI and Machine Learning training should equip employees to use generative AI tools, interpret AI-driven analytics, and understand the ethical and strategic considerations of AI adoption—shifting the focus from building models to enabling AI-augmented decision-making. Cloud computing, meanwhile, remains the backbone of digital operations, making training in cloud architecture, security, cost optimization, and cloud-native development essential for scalable and resilient systems.
Automation—including RPA and low-code/no-code workflow platforms—empowers employees to identify and automate repetitive tasks, freeing them to focus on higher-value, creative, and strategic work. A successful upskilling initiative must integrate these technical capabilities with a structured, continuous learning framework: assessing skills gaps against business goals, offering personalized and interactive learning experiences such as microlearning and hands-on sandbox environments, and cultivating a culture where learning is embedded in daily work. As automation takes over routine tasks, transversal skills—such as critical thinking, adaptability, ethical reasoning, and collaborative communication—become equally critical, enabling employees to leverage technology responsibly and solve complex, non-routine problems that machines cannot.

The investment in upskilling is an investment in future-proofing the organization. Companies that proactively train their employees in AI, cloud, and automation will unlock substantial benefits: reduced operational costs, faster innovation cycles, higher employee retention, and a significant competitive edge. By treating the workforce not as a static resource but as an evolving capability, businesses can transform the disruptive power of applied technology into a force for growth, creating a more agile, intelligent, and human-centric future of work.
Friday, November 21, 2025
Fremont, CA: Aerial surveys are a cornerstone of modern geospatial intelligence, providing high-resolution imagery for everything from urban planning and environmental monitoring to disaster response and precision agriculture. However, the raw data captured by aircraft or drones is just the beginning. The crucial bridge between a collection of digital photographs and a meaningful map or 3D model is the sophisticated process of data processing. This transformation turns raw imagery into actionable geospatial insights.

The Starting Line: Raw Imagery Acquisition

Aerial survey processing begins with the acquisition of raw imagery using specialized sensors mounted on fixed-wing aircraft, helicopters, or Unmanned Aerial Vehicles (UAVs). At this stage, high-resolution overlapping photographs are captured at defined intervals to ensure sufficient coverage and redundancy. Each image is accompanied by critical metadata, including camera calibration parameters, GPS coordinates, and Inertial Measurement Unit (IMU) readings such as pitch, roll, and yaw. This information forms the foundational dataset for accurate photogrammetric reconstruction.

Before photogrammetric processing can begin, the raw data undergoes a structured preparation phase. Images are transferred, organized, and checked for completeness or corruption. Camera calibration parameters are applied to correct lens distortions. At the same time, GPS and IMU data are refined—often through Post-Processed Kinematic (PPK) or Real-Time Kinematic (RTK) techniques—to achieve centimeter-level positional accuracy.

From Transformation to Insight: Processing, Modeling, and Quality Assurance

Once the dataset is prepared, the core photogrammetric workflow begins. The process starts with feature extraction, during which thousands of common tie points are identified across overlapping images. These features enable robust image alignment through bundle adjustment.
This mathematical optimization simultaneously computes the 3D coordinates of tie points and determines the precise position and orientation of each camera exposure. To ensure accurate georeferencing, Ground Control Points (GCPs) with surveyed coordinates are incorporated into the adjustment, anchoring the model to real-world spatial references.

Following alignment, the workflow proceeds to dense cloud generation, producing millions—or even billions—of 3D points representing the surveyed terrain and visible objects. This dense point cloud forms the basis for generating a suite of geospatial products. Orthomosaic maps provide seamless, scale-accurate imagery suitable for mapping and planning; Digital Surface Models (DSMs) capture elevations of natural and built features; Digital Terrain Models (DTMs) isolate the bare-earth surface for hydrological and engineering applications; and photorealistic 3D mesh models support visualization, inspection, and virtual simulations.

The final stage focuses on quality control and analytical outputs. Accuracy assessments ensure both absolute and relative precision, validated through independent checkpoints. Once verified, the data is used to extract meaningful insights—ranging from volumetric calculations and change detection to detailed feature extraction for infrastructure, land management, or environmental analysis. Through rigorous photogrammetric principles and structured quality assurance, raw aerial images evolve into authoritative, measurable geospatial products that support precise, data-driven decision-making across industries.

Ultimately, effective data processing moves the aerial survey from a mere photographic record to a powerful geospatial intelligence tool. As sensor technology advances and processing algorithms become more efficient, this field will continue to drive precision and certainty, empowering users to understand, manage, and shape the physical world with unprecedented fidelity.
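The checkpoint-based accuracy assessment mentioned above is commonly summarized as a root-mean-square error (RMSE) between surveyed and model-derived coordinates. A minimal sketch with hypothetical checkpoint values, not data from any real project:

```python
import math

def horizontal_rmse(checkpoints):
    """RMSE of horizontal error over independent checkpoints.
    Each entry pairs surveyed (x, y) with model-derived (x, y), in metres."""
    squared_errors = [(mx - sx) ** 2 + (my - sy) ** 2
                      for (sx, sy), (mx, my) in checkpoints]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Hypothetical checkpoints: (surveyed, model) coordinate pairs in metres.
pts = [((0.0, 0.0), (0.02, 0.01)),
       ((100.0, 50.0), (99.98, 50.02)),
       ((200.0, 120.0), (200.01, 119.99))]
rmse = horizontal_rmse(pts)   # a few centimetres, consistent with RTK-grade surveys
```

Vertical accuracy is typically reported the same way on the z component alone, since elevation errors usually dominate in photogrammetric products.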