APSCCS

All Point Security Cohesive Codec System


Novel Graphics Software

Rendering text, graphics, and sound to a device may involve real-time processing of abstract data. Although a GPU is better equipped than a CPU for dynamic graphics processing, the two must work in tandem, so a balance must be struck between the power of the GPU and the capacity of the CPU. This balance is easiest to achieve when abstract processing is minimal and the data to be rendered already exists in a form compatible with the output device. An example scenario is skipping the texturing, shading, and other stages of a graphics pipeline and loading pixel-ready data directly to a video output. Possible ways of producing pixel-ready data are described later in this section.
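As a rough illustration of what "pixel-ready" means, the sketch below builds a raw frame buffer whose bytes are already laid out in a format a display could consume directly, so no further pipeline stages are needed. The dimensions and the 8-bit RGBA layout are assumptions for the example, not part of APSCCS:

```python
# Sketch: a "pixel-ready" frame is just raw bytes in the device's native
# layout (assumed here: 8-bit RGBA, row-major). Dimensions are hypothetical.
WIDTH, HEIGHT, BYTES_PER_PIXEL = 640, 480, 4

def solid_frame(r, g, b, a=255):
    """Build one pixel-ready frame filled with a single color."""
    return bytes((r, g, b, a)) * (WIDTH * HEIGHT)

frame = solid_frame(32, 64, 128)

# The frame can be handed to a video output as-is: its size already
# matches what the device expects, with no texturing or shading step.
assert len(frame) == WIDTH * HEIGHT * BYTES_PER_PIXEL
```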

The static form of such data makes it suitable for APSCCS compression, with later decompression for synchronized processing by the GPU and CPU. In an environment that lacks hardware acceleration, the CPU alone would suffice. Processing such static graphics data benefits from the efficient chunk dynamics that APSCCS provides: an optimal chunk size can be selected for any form of processing, regardless of complexity. A media file compressed only with APSCCS yields device-ready video and sound after a single decompression step, and an optimal chunk size lets the host software supply multiple frames to the output device at a smooth, steady pace.
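The APSCCS codec itself is not specified here, so the sketch below uses zlib purely as a stand-in to show the workflow the paragraph describes: media data split into chunks of a selectable size, each compressed once, then recovered as device-ready data in a single decompression step. The chunk size and payload are assumptions:

```python
import zlib

CHUNK_SIZE = 64 * 1024  # assumed "optimal" chunk size for this scenario

def compress_in_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split device-ready media data into fixed-size chunks and compress
    each one independently (zlib stands in for APSCCS here)."""
    return [zlib.compress(data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]

def decompress_chunks(chunks):
    """Single decompression step: the output is device-ready again."""
    return b"".join(zlib.decompress(c) for c in chunks)

media = bytes(range(256)) * 1024           # placeholder media payload
chunks = compress_in_chunks(media)
assert decompress_chunks(chunks) == media  # round-trip is lossless
```

Because each chunk decompresses independently, a player could decode only the chunks it needs for the next few frames rather than the whole file.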


This manner of rendering pixel-ready frames can be applied to other forms of computer graphics. The data that translates to pixels can be created at a drawing stage involving GPU calculations of texture, shading, lighting, and other visual elements. These calculations apply current techniques for converting vertices to pixels, but they are allowed to operate on fragments within a frame that aligns with the screen. The fragment size may be constrained on hardware that lacks a GPU. Visual changes to a fragment are user-controlled but follow rules already established in a graphics pipeline.
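A minimal sketch of fragment-level drawing, assuming a flat buffer of one-byte pixels and hypothetical dimensions, shows how a user-controlled change can touch only one rectangular fragment of a screen-aligned frame:

```python
# Sketch: update only a rectangular fragment of a screen-aligned frame.
# The frame is a flat bytearray of 1-byte pixels; sizes are hypothetical.
WIDTH, HEIGHT = 16, 8
frame = bytearray(WIDTH * HEIGHT)

def draw_fragment(frame, x, y, w, h, value):
    """Apply a user-controlled visual change to one fragment only,
    leaving the rest of the frame untouched."""
    for row in range(y, y + h):
        start = row * WIDTH + x
        frame[start:start + w] = bytes([value]) * w

draw_fragment(frame, x=2, y=1, w=4, h=3, value=255)
# Pixels inside the fragment changed; pixels outside did not.
assert frame[1 * WIDTH + 2] == 255
assert frame[0] == 0
```

On hardware without a GPU, the same routine could simply be called with a smaller `w` and `h`, which is the fragment-size constraint the text mentions.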

Fragment creation may allow choices of frame resolution and bytes per pixel, giving rise to flexible storage sizes for the graphics data produced. Any fragment or frame may be replicated as desired, and a set of frames can be combined with others to produce the desired motion effect. The combined frames can be stored in chunks of a selected size through APSCCS, with the context for selecting chunks during playback defined by graphics scenarios such as video games and animation. This partitioned, granular control of graphics data (through a user interface) may be extended to all forms of media, and a different approach to media compression may even emerge from it.
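The flexible-storage idea can be made concrete with a small sketch: storage size follows directly from the chosen resolution and bytes per pixel, and replicating a frame is how a motion sequence holds an image steady. All numbers here are hypothetical examples, not APSCCS parameters:

```python
# Sketch: storage size falls out of the chosen resolution and bytes per
# pixel; replicating frames builds a motion sequence. Numbers are examples.
def frame_size(width, height, bytes_per_pixel):
    """Bytes needed to store one frame at the chosen settings."""
    return width * height * bytes_per_pixel

def replicate(frame: bytes, count: int):
    """Repeat one frame to hold an image steady for several ticks."""
    return [frame] * count

low_res  = frame_size(320, 240, 2)    # 16-bit pixels
high_res = frame_size(1920, 1080, 4)  # 32-bit pixels
assert low_res < high_res             # smaller settings, smaller storage

key_frame = b"\x00" * low_res
sequence = replicate(key_frame, 3) + replicate(key_frame, 2)
assert len(sequence) == 5             # five frames of motion data
```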

Copyright © 2025 AOA Incorporated; All Rights Reserved.