Our black-box approach (the Tsert Method©®™) to testing object-oriented programs is based on the use of protocol data units, built by processing the methods of a given class, to communicate with a test harness. Testing object-oriented programs has always been difficult, especially with respect to inheritance and polymorphism. The approach presented here allows the tester to test classes in a bottom-up manner, handling inheritance and polymorphism as classes and subclasses are processed.
The use of Protocol Data Units (PDUs) eliminates the need to generate stubs for classes and constructors. By handling only publicly accessible constructs, our black-box approach retains two of the main benefits of object-oriented programming: data hiding and abstraction.
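The PDU-driven style described above can be sketched as follows. This is a minimal illustration, not the actual Tsert Method wire format: the field names, the JSON encoding, and the `Account` class are all invented for the example. A PDU names a class, constructor arguments, a method, and its arguments; the harness instantiates and invokes, so no stubs are written by hand.

```python
import json

class Account:
    """Example class under test (hypothetical)."""
    def __init__(self, balance):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount
        return self.balance

# classes made known to the harness by processing their methods
REGISTRY = {"Account": Account}

def run_pdu(pdu_bytes):
    """Decode a PDU, build the instance, invoke the method, report."""
    pdu = json.loads(pdu_bytes)
    cls = REGISTRY[pdu["class"]]
    obj = cls(*pdu["ctor_args"])                  # no hand-written stub needed
    result = getattr(obj, pdu["method"])(*pdu["args"])
    return {"expected": pdu["expected"], "actual": result,
            "passed": result == pdu["expected"]}

pdu = json.dumps({"class": "Account", "ctor_args": [100],
                  "method": "deposit", "args": [50], "expected": 150})
report = run_pdu(pdu)
print(report["passed"])  # True
```

Because only the public constructor and method are exercised, the class's internals stay hidden from the harness.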
              
Deciphering Unknown Languages using Glyph Positional Analysis
We intend to show that deciphering unknown languages with the same methodology used in our content and translation engine is simpler and more effective. Our assumption is that all human languages are based on the same basic concepts and attributes, and that they are all glyph-based. They all rely on visual elements which are either unitary or part of a group, constituting the unitary elements of a language: letters, words, pictograms, and word or pictogram modifiers.
The concepts and attributes of all human languages relate to actors, patients, qualifiers, and actions (subjects, objects, adjectives, and verbs). These notions are used to associate meaning with a structurally deciphered language.

The positional analysis is performed iteratively, using a window of three words or ideograms. The process is the same as that used in our content and translation engine (see natural language processing [1]). The difference is that the analysis starts from iteration 0, with no assumptions made about the nature of the language. The questions to be answered include: does the language use an alphabet or a collection of ideograms? Is the language read left to right, right to left, top down, or bottom up?
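The three-element window can be sketched as below. This is only an illustration of the counting step, assuming the input has already been segmented into typed elements; the engine's actual statistics are richer.

```python
from collections import Counter

def window_stats(elements, size=3):
    """Slide a fixed-size window over a sequence of elements and
    count window occurrences and per-slot element positions."""
    windows = Counter()
    positions = Counter()                  # (element, slot) -> frequency
    for i in range(len(elements) - size + 1):
        w = tuple(elements[i:i + size])
        windows[w] += 1
        for slot, e in enumerate(w):
            positions[(e, slot)] += 1
    return windows, positions

tokens = "the cat saw the dog and the cat ran".split()
windows, positions = window_stats(tokens)
print(positions[("the", 0)])   # 3 -- how often "the" opens a window
```

Elements that cluster in particular slots hint at structural roles (articles, modifiers, verbs) before any meaning is assigned.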
              
Iteration 0 consists, for example, of typing every glyph of a language such as English by assigning each a distinct number. The glyph sets constituting letters quickly emerge from the statistical data. The second iteration then types each letter glyph set, and the analysis is repeated. Typed visual elements are treated as unitary elements of the language when, after successive iterations, the significance of the statistical data warrants it.
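Iteration 0 can be sketched as a simple typing pass: each distinct glyph receives a number, and frequencies are collected. The statistics from which letter glyph sets would emerge in later iterations are reduced here to a bare frequency count.

```python
from collections import Counter

def type_glyphs(text):
    """Assign each distinct glyph a numeric type (in order of first
    appearance) and collect frequency statistics over the typed text."""
    types = {}            # glyph -> numeric type
    typed = []
    for glyph in text:
        if glyph not in types:
            types[glyph] = len(types)
        typed.append(types[glyph])
    return types, typed, Counter(typed)

types, typed, freq = type_glyphs("banana")
print(types)                # {'b': 0, 'a': 1, 'n': 2}
print(freq[types["a"]])     # 3
```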
The result of the analysis is the extraction of glyph sets and sequences, which can provide clues to the syntax and modifiers of a language, such as prefixes, suffixes, progressives, tenses, and plurals. The extracted sequences are sequences of typed visual elements with a statistically high occurrence level, which can be seen as linguistic phrase structures. The set of these sequences is viewed as the formal-grammar syntax of the language, which can be represented as graphs and studied with formal-grammar and graph algorithms.
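Representing the extracted sequences as a graph can be sketched as below: each high-frequency sequence of typed elements contributes edges to a directed adjacency structure, which standard graph algorithms can then analyze. The numeric elements here are invented stand-ins for extracted glyph sets.

```python
def sequences_to_graph(sequences):
    """Build a directed graph (adjacency sets) from element sequences:
    an edge a -> b means b was observed to follow a."""
    graph = {}
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            graph.setdefault(a, set()).add(b)
    return graph

# typed elements standing in for extracted glyph-set sequences
extracted = [(0, 1, 2), (0, 1, 3), (1, 2, 4)]
graph = sequences_to_graph(extracted)
print(sorted(graph[1]))   # [2, 3] -- element 1 is followed by 2 or 3
```

Branching points in such a graph (element 1 above) are candidates for positions where a grammar allows alternatives.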
              
The deciphering is completed by comparing the extracted structures with those of similarly analyzed languages. Context information from anthropologists and archaeologists is also used to guess at possible actors, patients, qualifiers, and actions, usually seen as nouns, adjectives, and verbs.
                
              
CETE©® a Content-Enabled Translation Engine.
Salt Protocol©® an Identity-Based Authentication Protocol using Synchronized Systems
The Salt protocol [patent pending] is our approach to the protection of Internet-based communication. Communication entities can reliably recognize each other on a non-private network, such as the Internet (often referred to as the Web), without requiring a Secure Sockets Layer (SSL) handshake and a certificate.
The Salt protocol is an identity-based authentication protocol. It essentially requires a communication entity to identify itself with a specific access key: a sequence of bytes generated by a cryptographic engine.
The protocol also requires that the two entities involved in a communication session be able to synchronize on a particular salt value, encryption algorithm, cipher mode, obfuscation mode, and set of encryption characters. This set of required information is called the salt-setting.
The protocol also requires that servers belonging to a given private network remain synchronized with regard to salt settings, signatures, and users' public encryption keys.
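The role of the salt-setting as shared state can be sketched as below. The field names and the HMAC-based key derivation are illustrative assumptions, not the proprietary protocol: the point shown is only that two entities holding the same salt-setting and access key derive the same session key, and so can recognize each other without an SSL handshake.

```python
import hashlib
import hmac
import json

# illustrative salt-setting; real contents are protocol-specific
salt_setting = {
    "salt": "9f3a",
    "algorithm": "AES-256",
    "cipher_mode": "CBC",
    "obfuscation_mode": "shuffle",
    "encryption_chars": "abcdef0123456789",
}

def session_key(setting, access_key: bytes) -> str:
    """Derive a session key from the shared salt-setting and the
    entity's access key (a sketch, not the actual derivation)."""
    canonical = json.dumps(setting, sort_keys=True).encode()
    return hmac.new(access_key, canonical, hashlib.sha256).hexdigest()

client = session_key(salt_setting, b"entity-access-key")
server = session_key(salt_setting, b"entity-access-key")
print(client == server)   # True: synchronized peers agree on the key
```

If either side's salt-setting drifts out of sync, the derived keys differ and recognition fails, which is why server-side synchronization is required.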
Our approach is usually referred to as an N-factor authentication protocol, using the salt-setting as the shared secret. The Salt protocol, like other modern Internet authentication protocols, relies on the Diffie-Hellman-Merkle key exchange, hereafter referred to as the Diffie-Hellman protocol, to initiate a shared-secret exchange with unknown peers.
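The Diffie-Hellman exchange itself is standard and can be shown with a toy example. The numbers here are deliberately tiny for readability; real deployments use large safe primes or elliptic curves.

```python
# Toy Diffie-Hellman key exchange (small numbers for illustration only).
p, g = 23, 5                 # public prime modulus and generator
a, b = 6, 15                 # each peer's private value, never transmitted

A = pow(g, a, p)             # first peer sends g^a mod p
B = pow(g, b, p)             # second peer sends g^b mod p

secret_a = pow(B, a, p)      # first peer computes (g^b)^a mod p
secret_b = pow(A, b, p)      # second peer computes (g^a)^b mod p
print(secret_a == secret_b)  # True -- both arrive at the same shared secret
```

Only the public values `A` and `B` cross the network; an observer cannot feasibly recover the shared secret from them when `p` is large.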
                
              
Breeze::OS Reminder©® Subsystem a Uniform, Internet-Enabled, & Fragment-Based Notification System
Our approach is to systematically categorize and manage, with the use of visual notifications, every type of event that an operating system, or a user's interaction with it, can generate. Such notifications will hereafter be referred to as reminders.
The Breeze::OS Reminder©® [patent pending] subsystem provides a way to visually notify users of events triggered on their desktop; it also provides a simple way for users to exchange messages with each other. Such reminders, akin to visual texting or email (texting existed in Unix systems with commands such as mesg and write), can be exchanged across the Internet. The transmission takes place using HTTP and XML fragments.
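A reminder serialized as an XML fragment for HTTP transport might look as follows. The element and attribute names are invented for illustration; they are not the published Reminder schema.

```python
import xml.etree.ElementTree as ET

def make_reminder(sender, recipient, text):
    """Serialize a reminder as an XML fragment (illustrative schema)."""
    reminder = ET.Element("reminder", sender=sender, recipient=recipient)
    ET.SubElement(reminder, "body").text = text
    return ET.tostring(reminder, encoding="unicode")

fragment = make_reminder("alice@host-a", "bob@host-b", "Backup finished.")
print(fragment)
# e.g. <reminder sender="alice@host-a" recipient="bob@host-b">
#        <body>Backup finished.</body></reminder>
```

The fragment would then travel in the body of an ordinary HTTP request, so any host with an HTTP stack can receive reminders.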
                
              
Deriving Messaging Schemas from the Reminder Schema
				  
The Display of Breeze::OS Reminders©® through Static & Animated Pictograms
The Breeze::OS Reminder©® subsystem's pictogrammic language is called Picto©® [patent pending]. It is based on the use of both static and animated images, and does for your desktop what traffic signs do for the road. Like all pictogram-based languages, combinations of pictograms can convey a given concept; these concepts are extracted by our content engine, which was developed to read text using natural language processing methodologies. The extraction of concepts is more difficult with the use of clustering methodologies.
                
              
PI Desktop©® a Desktop with an In-Kernel, Salted HTTP Daemon.
We try to show why a desktop running on top of an in-kernel HTTP daemon is a simpler way to ensure secure and rapid access to files in a file system. File systems that have the Guard feature, like our TFS file system, can prevent direct access to files, and must therefore expect requests to come only from the in-kernel HTTP daemon. The Salt and Guard features replace the Security-Enhanced Linux (SELinux) setup, which is, for the most part, difficult to manage.
The PI Interface & UI Toolkit©®
The PI Interface and UI toolkit [patent pending] are based on HTML and HTTP. Our UI toolkit relies on widgets and agents, which in HTML are identified by the OBJECT tag. Communication is based on HTTP requests, which take the form of queries for files, tuples, lists of tuples, and maps of tuples. Our toolkit includes template-like processing [patent pending] of UI files, using Tsert.com tags, which relies on the retrieval of tuples.
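The tuple-oriented request style can be sketched with an ordinary query string. The URL and parameter names are hypothetical; the point shown is only how one HTTP query decodes into a list of tuples and a map of tuples.

```python
from urllib.parse import parse_qsl, urlsplit

# hypothetical UI request; parameter names are invented
request = "http://localhost/ui/form.html?widget=button&label=Save&action=submit"

query = urlsplit(request).query
tuples = parse_qsl(query)        # list of (name, value) tuples
tuple_map = dict(tuples)         # map of tuples keyed by name

print(tuples[0])                 # ('widget', 'button')
print(tuple_map["label"])        # Save
```

A template pass over a UI file would substitute retrieved tuple values wherever the corresponding tags appear.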
When running on top of a content engine, the document-centric [patent pending] feature of the PI Interface can be enabled, allowing interaction to be based on the type of document the user is accessing, creating, or editing. For example, the content engine's scanning method is used to identify the type of document a user creates, by recognizing sequences of tokens/keywords that are specific to certain types of documents, e.g. letters or emails.
Agents, or critters, are self-executable scripts, plugins, or applications, whereas widgets are used for presentation within a graphical user interface (GUI). Both critters and widgets can respond to signals derived from the HTML ones. Time-based signals, as well as message-reception signals, are also used.
The PI Interface UI file is used to entirely capture the functionality of critters, plugins, and applications. Actions can be mapped to HTTP requests [patent pending], to natively compiled source code as in a library, or to embedded script code -- JavaScript or T-Script [patent pending].
                
              
              
T-Script©® an Object-Based Script
T-Script [patent pending] is the script language used with our operating system. It has simple features and constructs. It is object-based and relies on a small set of objects: variables, collections, timers, threads, channels, reminders, and widgets. It provides polymorphism and inheritance using registering statements and dynamic binding. Widgets can register and unregister collections and methods. Scripts can load and unload additional script procedures.

Variables, collections, and widgets are polymorphic. Variables can be scalars, uniform resource identifiers (URIs), regular expressions, localized and internationalized dates and times, and Tsert.com template tags.
Collections can be stacks, lists, maps, sets, vectors, queues, records, protocols, trees, PDU trees, XML trees, SQL cursors, SQL databases, search databases, matrices, graphs, and extended graphs (semantic networks). Widgets can be any widget provided by a toolkit library, e.g. Qt, Java, or GTK.

We chose an object-based approach because we believe it is a simpler way to provide basic direct inheritance than the object-oriented one. The inheritance provided is dynamic and can be completely changed, which gives rise to the possibility of writing adaptive scripts.
Script-based applications are, by default and for security reasons, not granted direct access to any file systems. Access is only granted through the SALT protocol and script signatures.
              
TFS Guard©® Access Control for the TFS File System.
The TFS [patent pending] file system, developed for our operating system, allows access to files by identifying the source of a request, through a SALT handshake and a signature verification. Each application packaged for our OS must provide a signature for every agent application which may need direct access to files. We intend to show that our approach to access control is a more efficient way to secure files on a file system than the Security-Enhanced Linux (SELinux) one.
                
              
SaltFS©® a Crypting Interface for the TFS File System
 
TFS©® the Terabyte File System with Search-Like File Retrieval and Secure Logging
The Terabyte File System (TFS©® [patent pending]) was developed to improve interaction with agents and applications. The structure of the file system is made of links and vertices/nodes. Each vertex or node is a file node, or inode, and the links are the paths to these vertices or nodes. It relies on a single-level inode storage structure with no folders, and on unique keys pointing directly to files.
The file system is made up of three distinct sections: vertex, index, and inode. What used to be folders are seen as vertices in a graph, since they are simply pointers to a location. Mount points can be mounted hidden, i.e. only a guard application can access files under the mount point, and every other application must issue a request using the SALT protocol. Indices are keys used for searches.

Agents can make search-like requests for files. The requests are based on path-spec [patent pending] semantics, where the search keywords are the words that constitute the file path. Just as in search requests to a web engine, path-spec search requests can be specified with boolean logic operators such as or, and, not, etc.
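Path-spec matching can be sketched as below: each file path is broken into its constituent words, and keyword sets are combined with and/or/not semantics. The real TFS query syntax is only gestured at here; the paths and operator spelling are invented for the example.

```python
def path_words(path):
    """Split a file path into the words that constitute it."""
    return set(path.strip("/").replace(".", "/").split("/"))

def matches(path, all_of=(), any_of=(), none_of=()):
    """Boolean keyword match over a path's words (sketch of
    path-spec semantics: and / or / not)."""
    words = path_words(path)
    return (all(w in words for w in all_of)
            and (not any_of or any(w in words for w in any_of))
            and not any(w in words for w in none_of))

paths = ["/home/alice/reports/budget.txt",
         "/home/alice/photos/budget.jpg",
         "/home/bob/reports/plan.txt"]

hits = [p for p in paths if matches(p, all_of=("budget",), none_of=("photos",))]
print(hits)   # ['/home/alice/reports/budget.txt']
```

Since TFS keys point directly to files, such a search resolves without walking a folder hierarchy.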
                
The TFS also includes a built-in notify feature that agents can use when they require notifications of access events on certain files and links. The TFS also includes an extended attribute layer, dealing with content, which allows a text-scanning agent to add content-based keywords to an inode.

The last layer is an access control layer, called TFS-Guard©®, based on our SALT protocol and agent signatures. When a given path to a file is guarded, any agent requesting access to that particular file must provide a SALT key to be granted access; additionally, the agent's signature must match the one stored by the file system.
The TFS has a built-in access log, which can only be erased, never modified, and only when the system is booted in maintenance mode. When the logging feature is enabled, every request to guarded files is logged.
Modeling Adaptive Behaviour using Retrieval, Storage, and Strength of Memories
              
Using Character Traits in Managing the Strength of Memories in a Memory-Driven Adaptive System
              
The Command-Map Engine (SCM) of the Content-Enabled Breeze::OS Desktop©®
The content engine is used to parse typed text and transform it into a computerese-based [2] command map. The semantic command map (SCM) [patent pending] comprises the set of actions, actors, objects, and their attributes, collected from the semantic content extracted from the user-entered text.
The command map is then used to generate a set of specific desktop-related actions, in order to perform the specified user command. Voice commands can be parsed into text and then fed to the command-map engine.
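The two steps above can be sketched as follows. The map layout, the action names, and the generated command line are illustrative stand-ins for the engine's internal representation, not its actual format.

```python
# command map as might be produced from parsed text such as
# "open the budget report in the editor"
command_map = {
    "action": "open",
    "object": "report",
    "attributes": {"topic": "budget", "target": "editor"},
}

# dispatch table from (action, object) pairs to desktop actions
ACTIONS = {
    ("open", "report"): lambda attrs:
        f"editor --file {attrs['topic']}-report.txt",
}

def to_desktop_command(scm):
    """Translate a semantic command map into a desktop-level action."""
    handler = ACTIONS[(scm["action"], scm["object"])]
    return handler(scm["attributes"])

print(to_desktop_command(command_map))  # editor --file budget-report.txt
```

Because the dispatch step is separate from parsing, the same command map could drive any system that supplies its own action table, which is the genericity claimed below.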
We intend to show that our command-map-based approach can be used to develop a completely generic command-response engine, which can be embedded into any system, and can also be used to develop a natural-language-based script [t-escript patent pending].
              
UTE©® a Tag-Based Template Engine
ENET©® a Content-Enabled Semantic Network Toolkit
We intend to show that content-based semantic networks, with the proper graph traversal routines, can be just as efficient and accurate as inference rule engines at delivering information.

Building the semantic network using content as a basis facilitates interaction with agents needing content-related information, such as search or translation engines. Adding an additional layer that extracts inter-relationships between a given set of vertices gives rise to concept-based information retrieval. A given concept can be extracted from a set of path overlays [1], by examining the links between the vertices.
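The traversal idea can be sketched with a breadth-first search over a tiny semantic network: the chain of links connecting two vertices is the kind of path from which a concept would be read off. The vertex and edge names below are invented for illustration.

```python
from collections import deque

# tiny illustrative semantic network: vertex -> [(relation, vertex), ...]
network = {
    "stone":  [("is", "object"), ("has", "weight")],
    "bird":   [("is", "animal"), ("can", "fly")],
    "fly":    [("requires", "lift")],
    "weight": [("opposes", "lift")],
}

def find_path(graph, start, goal):
    """Breadth-first search returning the vertex chain linking
    start to goal, or None if no chain exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for _, nxt in graph.get(path[-1], []):
            if nxt == goal:
                return path + [goal]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path(network, "stone", "lift"))  # ['stone', 'weight', 'lift']
```

The traversal is iterative and bounded by the graph size, which is the contrast drawn below with overly recursive inference rule engines.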
The challenge to using this approach is to see whether we can get a content-enabled, natural language processing engine to understand concepts such as: why can a stone fly?

Relying on graph traversal totally eliminates the weakness of an overly recursive inference rule engine, and relationships between vertices are more easily extracted.

Our natural language processing (NLP) engine can be used to easily build semantic networks, by teaching it how to read dictionaries and thesauri. Since it can read and understand unstructured text, our NLP engine can also build social networks, using our semantic network toolkit.