Tsert.Com 


Articles & Demo

Article Abstract

Using a PDU and Scenario-Based Methodology in Testing Object-Oriented Programs


White-Papers Article

Pierre Innocent, Member, IEEE

Our black-box approach (Tsert Method ©®™) to testing object-oriented programs is based on the use of protocol data units (PDUs), built by processing the methods of a given class, to communicate with a test harness. Testing object-oriented programs has always been difficult, especially in handling inheritance and polymorphism. The approach presented here allows the tester to test classes in a bottom-up manner, handling inheritance and polymorphism as the subclasses and classes are processed.

The use of Protocol Data Units (PDUs) eliminates the need to generate stubs for classes and constructors. By handling only publicly accessible constructs, our black-box approach retains the main benefits of object-oriented programs: data hiding and abstraction.
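As a rough illustration (the Tsert PDU layout is not published, so the class/method/params fields below are a hypothetical stand-in), a test driver could derive one PDU per public method by introspection:

    import inspect
    import json

    def build_pdus(cls):
        # One hypothetical PDU per public method; private names are
        # skipped so only publicly accessible constructs are exercised.
        pdus = []
        for name, member in inspect.getmembers(cls, inspect.isfunction):
            if name.startswith("_"):
                continue
            params = [p for p in inspect.signature(member).parameters
                      if p != "self"]
            pdus.append({"class": cls.__name__, "method": name,
                         "params": params})
        return pdus

    class Stack:
        def push(self, item): pass
        def pop(self): pass

    # Each PDU would then be serialized and sent to the test harness.
    for pdu in build_pdus(Stack):
        print(json.dumps(pdu))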


Article Abstract

Natural Language Understanding Using Word-Type Disambiguation and Semantic Networks

White-Papers Article  

Pierre Innocent, Member, IEEE

Our approach [patent pending] to natural language understanding and content analysis of unstructured text in non-ideogrammic languages (e.g. Latin, Slavic, and Germanic ones) is anchored in the process of word-type disambiguation. The process itself is based on statistical analysis of source text written according to the normal usage of a language, that is, how the language is used by native speakers; the same analysis must be done for jargon and for specialized domain languages such as legalese.

The statistical analysis is performed to extract the probabilities of appearance of word types in a sequence of word tokens. Once the statistical analysis is complete, a rule set is created. The rule set is then used to improve the process of phrase-structure analysis, content analysis, and translation of the unstructured source text.
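As a minimal sketch of this extraction step (the tag set and the miniature tagged corpus below are illustrative assumptions, not Tsert data), the transition probabilities between word types can be counted directly:

    from collections import Counter, defaultdict

    # Hypothetical miniature tagged corpus: (token, word-type) pairs.
    tagged = [("the", "DET"), ("dog", "NOUN"), ("runs", "VERB"),
              ("the", "DET"), ("fast", "ADJ"), ("dog", "NOUN")]

    bigrams = defaultdict(Counter)
    for (_, t1), (_, t2) in zip(tagged, tagged[1:]):
        bigrams[t1][t2] += 1

    # P(next type | current type): the raw material for a rule set.
    for t1, nexts in bigrams.items():
        total = sum(nexts.values())
        for t2, n in nexts.items():
            print(f"P({t2} | {t1}) = {n / total:.2f}")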

Semantic networks (knowledge bases) and heuristics based on Natural Language Processing (NLP) are used to weight the word tokens extracted from the source text, in order to build a network of semantically linked words that gives the user some notion of the content of the text.

The relevance of our approach in building search and language deciphering engines is also discussed.


Article Abstract

Deciphering Unknown Languages Using Glyph Positional Analysis


White-Papers Article

Pierre Innocent, Member, IEEE

We intend to show that deciphering unknown languages with the same methodology used in our content and translation engine is simpler and more effective. Our assumption is that all human languages are based on the same basic concepts and attributes, and that they are all glyph-based: they all rely on visual elements, either unitary or part of a group, which constitute the unitary elements of a language, such as letters, words, pictograms, and word or pictogram modifiers.

The concepts and attributes of all human languages relate to actors, patients, qualifiers, and actions (subjects, objects, adjectives, and verbs). These notions are used to associate meaning with a structurally deciphered language.

The positional analysis is performed iteratively, using a window of three words or ideograms. The process is the same one used in our content and translation engine (see the natural language processing paper [1]). The difference is that the analysis starts from iteration 0, with no assumptions made about the nature of the language: does it use an alphabet or a collection of ideograms? Is it read left to right, right to left, top to bottom, or bottom to top?

In iteration 0, for example, every glyph of a language such as English is typed by assigning it a distinct number. The glyph sets constituting letters quickly emerge from the statistical data. The second iteration then types every letter glyph set and repeats the analysis. Typed visual elements are treated as unitary elements of the language when, after successive iterations, the significance of the statistical data warrants it.
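A toy sketch of iteration 0, assuming a known text stands in for an undeciphered one (the window of three follows the description above; everything else is illustrative):

    from collections import Counter

    sample = "the cat saw the rat"   # stand-in for an undeciphered text

    # Iteration 0: assign each distinct glyph a numeric type.
    glyph_type = {}
    for glyph in sample:
        glyph_type.setdefault(glyph, len(glyph_type))
    typed = [glyph_type[g] for g in sample]

    # Slide a window of three typed elements over the text; frequent
    # windows hint at recurring letter or word groups.
    windows = Counter(tuple(typed[i:i + 3]) for i in range(len(typed) - 2))
    for sequence, count in windows.most_common(3):
        print(sequence, count)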

The result of the analysis is the extraction of glyph sets and sequences, which can provide clues to the syntax and modifiers of a language, such as prefixes, suffixes, progressive forms, tenses, and plurals. The extracted sequences are sequences of typed visual elements with a statistically high occurrence level, which can be seen as linguistic phrase structures. The set of these sequences is taken as the formal-grammar syntax of the language, which can be represented as graphs and studied with formal-grammar and graph algorithms.

The deciphering is completed by comparing the extracted structures with those of similarly analyzed languages. Context information from anthropologists and archeologists is also used to guess at possible actors, patients, qualifiers, and actions, usually seen as nouns, adjectives, and verbs.

Article Abstract

CETE©®: a Content-Enabled Translation Engine


White-Papers Article

Pierre Innocent, Member, IEEE

Our translation system uses our content engine, which is based on natural language processing [1], a patent-pending NLP-based scanning method, and semantic networks. The scanning is based on type disambiguation using statistical data (see the NLP and Deciphering papers).

Our translation system forgoes the interlinguas that most translation systems, apart from statistical ones, use. Instead, our system manipulates the parse tree of the source language and transforms it into a parse tree of the target language. The parse-tree transformational rules are based on the basic grammar of each language.

The parse tree is built by a text-scanning layer, using the grammar parsing rules of a given language. The transformational rules are applied first; then the semantic networks are queried for additional word/noun disambiguation information. The grammar rules for gender and verb tense are then applied.
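As a minimal sketch of such a transformational rule (the toy tree shape and the English-to-French adjective reordering are assumptions chosen for illustration; the actual CETE rule format is not published):

    # Nodes are (label, children) tuples; leaves are (label, word) pairs.
    def transform(node):
        label, children = node
        if isinstance(children, str):        # leaf: nothing to reorder
            return node
        children = [transform(c) for c in children]
        # Rule: inside a noun phrase, move the adjective after the noun.
        if label == "NP" and [c[0] for c in children] == ["ADJ", "NOUN"]:
            children = [children[1], children[0]]
        return (label, children)

    english = ("S", [("NP", [("ADJ", "red"), ("NOUN", "car")]),
                     ("VP", [("VERB", "stops")])])
    print(transform(english))
    # ('S', [('NP', [('NOUN', 'car'), ('ADJ', 'red')]), ('VP', [('VERB', 'stops')])])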

We intend to show that our translation engine is more accurate and more easily extensible. It can quickly be adapted to process additional languages, and rule sets can be generated automatically using an evolutionary methodology [2].


Article Abstract

Salt Protocol©®: an Identity-Based Authentication Protocol Using Synchronized Systems


White-Papers Article

Pierre Innocent, Member, IEEE

The Salt protocol [patent pending] is our approach to the protection of Internet-based communication. Communication entities can reliably recognize each other on a non-private network such as the Internet (more often referred to as the Web) without requiring a Secure Socket Layer (SSL) handshake and a certificate.

The Salt protocol is an identity-based authentication protocol. It essentially requires a communication entity to identify itself with a specific access key: a sequence of bytes generated by a cryptographic engine.

The protocol also requires that the two entities involved in a communication session be able to synchronize on a particular salt value, encryption algorithm, cipher mode, obfuscation mode, and set of encryption characters. This set of required information is called the salt-setting.
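A minimal sketch of what a salt-setting could look like as a data structure (the field names follow the list above; the concrete types and values are illustrative assumptions):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SaltSetting:
        salt: bytes
        algorithm: str         # e.g. "AES-256"
        cipher_mode: str       # e.g. "CBC"
        obfuscation_mode: str
        charset: str           # set of encryption characters

    setting = SaltSetting(salt=b"\x13\x37", algorithm="AES-256",
                          cipher_mode="CBC", obfuscation_mode="none",
                          charset="base64")
    # Both peers must hold an identical SaltSetting before exchanging data.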

The protocol also requires that servers belonging to a given private network remain synchronized with regard to salt settings, signatures, and users' public encryption keys.

Our approach is usually referred to as an N-factor authentication protocol, with the salt-setting acting as the shared secret. The SALT protocol, like other modern Internet authentication protocols, relies on the Diffie-Hellman-Merkle key exchange, hereafter referred to as the Diffie-Hellman protocol, to initiate a shared-secret exchange with unknown peers.

Article Abstract

Breeze::OS Reminder©® Subsystem: a Uniform, Internet-Enabled & Fragment-Based Notification System


White-Papers Article 

Pierre Innocent, Member, IEEE

Notification is an old concept in operating system design. Notification systems used to be implemented as system logs, which other tools would then use to generate reports. Some tools send email to the system manager when certain conditions arise.

Our approach is to systematically categorize and manage, with the use of visual notifications, every type of event that an operating system, or a user's interaction with it, can generate. Such notifications will hereafter be referred to as reminders.

The Breeze::OS Reminder©® [patent pending] subsystem provides a way to visually notify users of events triggered on their desktop; it also provides a simple way for users to exchange messages with each other. Such reminders, akin to visual texting or email (texting existed in Unix systems through commands such as mesg and write), can be exchanged across the Internet. The transmission takes place using HTTP and XML fragments.
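As a rough sketch of such an exchange (the element and attribute names below are hypothetical; the actual reminder schema is not published), a reminder could be serialized as an XML fragment and sent over HTTP:

    import xml.etree.ElementTree as ET

    # Build a hypothetical reminder fragment.
    reminder = ET.Element("reminder", kind="message", priority="low")
    ET.SubElement(reminder, "from").text = "alice@example.org"
    ET.SubElement(reminder, "body").text = "Backup finished at 02:00"

    fragment = ET.tostring(reminder, encoding="unicode")
    # The fragment would then be POSTed to the peer's reminder subsystem.
    print(fragment)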

Article Abstract

Deriving
Messaging Schemas
from the
Reminder Schema


White-Papers Article

Pierre Innocent, Member, IEEE

Every type of message exchange can be seen as essentially a reminder with associated attributes.
Our goal is to support the view that deriving an email schema, and other email-type messaging schemas, from the reminder schema leads to a more structured and efficient system of message exchange, storage, retrieval, and organization.
The added attributes relating to display (see Breeze::OS Reminder©® [patent pending]) allow such messages to be visualized, akin to a textual summary of their content.

Other messages that are primarily visual, like advertising, can be directly presented to the user, using whatever underlying operating system mechanism is available.

The schemas derived from our reminder schema are an email schema, a message-of-the-day (Motd/Rotd) schema, a feed schema, an advertising schema, and a problem-tracking schema. All five schemas are copyrighted and patent pending.
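A minimal sketch of this kind of derivation (the field names are hypothetical; the copyrighted schemas themselves are not reproduced here), with each derived schema extending the reminder base:

    from dataclasses import dataclass

    @dataclass
    class Reminder:                   # base schema
        subject: str
        body: str
        display_hint: str = "text"    # attribute relating to display

    @dataclass
    class Email(Reminder):            # email schema derived from the base
        sender: str = ""
        recipients: tuple = ()

    @dataclass
    class Motd(Reminder):             # message-of-the-day schema
        expires: str = ""

    msg = Email(subject="Hi", body="See you at noon",
                sender="bob@example.org", recipients=("alice@example.org",))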

Article Abstract

The Display of Breeze::OS Reminders©® through Static & Animated Pictograms


White-Papers Article

Pierre Innocent, Member, IEEE

Notification systems usually present their information using text and images. The approach we adopted for our reminder-based notification system is to develop a complete pictogrammic language, able to visually represent any information that a computer system can generate, as well as any information that our content engine [1] can extract from text transmitted or stored on a computer.

The Breeze::OS Reminder©® subsystem's pictogrammic language is called Picto©® [patent pending]. It is based on the use of both static and animated images, and does for your desktop what traffic signs do for the road. As in all pictogram-based languages, combinations of pictograms can convey a given concept; said concepts are extracted by our content engine, which was developed to read text using natural language processing methodologies. The extraction of concepts is more difficult with clustering methodologies.

Article Abstract

PI Desktop©®: a Desktop with an In-Kernel, Salted HTTP Daemon


White-Papers Article

Pierre Innocent, Member, IEEE

We try to show why a desktop running on top of an in-kernel HTTP daemon is a simpler way to ensure secure and rapid access to files on a file system. File systems that have the Guard feature, like our TFS file system, can prevent direct access to files, and can therefore expect requests to come only from the in-kernel HTTP daemon. The Salt and Guard features replace the Security-Enhanced Linux (SELinux) setup, which is for the most part difficult to manage.


Article Abstract

The PI Interface & UI Toolkit©®


White-Papers Article

Pierre Innocent, Member, IEEE

The PI Interface and UI toolkit [patent pending] are based on HTML and HTTP. Our UI toolkit relies on widgets and agents, which in HTML are identified by the OBJECT tag. Communication is based on HTTP requests, which take the form of queries for files, tuples, lists of tuples, and maps of tuples. Our toolkit includes template-like processing [patent pending] of UI files using Tsert.com tags, which relies on the retrieval of tuples.

When running on top of a content engine, the document-centric [patent pending] feature of the PI Interface can be enabled, allowing interaction to be based on the type of document the user is accessing, creating, or editing. For example, the content engine's scanning method is used to identify the type of document a user creates, by recognizing sequences of tokens/keywords which are specific to certain types of documents, e.g. letters or emails.
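As a small sketch of the tuple-query style of communication (the endpoint, query parameter, and line-oriented key=value response format are all assumptions; the real PI request and response formats are not published):

    import urllib.request

    # Hypothetical tuple query against a local PI daemon.
    url = "http://localhost:8080/tuples?widget=contact-list"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode()

    # Assume one "key=value" tuple per line in the response.
    tuples = [tuple(line.split("=", 1)) for line in body.splitlines()]
    print(tuples)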

Agents, or critters, are self-executable scripts, plugins, or applications, whereas widgets are used for presentation within a Graphical User Interface (GUI). Both critters and widgets can respond to signals derived from the HTML ones. Time-based signals, as well as message-reception signals, are also used.

The PI Interface UI file is used to entirely capture the functionality of critters, plugins, and applications. Actions can be mapped to HTTP requests [patent pending], to natively compiled code in a library, or to embedded script code -- JavaScript or T-Script [patent pending].


Article Abstract

T-Script©®: an Object-Based Script


White-Papers Article

Pierre Innocent, Member, IEEE

T-Script [patent pending] is the script language used with our operating system. It has simple features and constructs. It is object-based and relies on a small set of objects: variables, collections, timers, threads, channels, reminders, and widgets. It provides polymorphism and inheritance using registering statements and dynamic binding. Widgets can register and unregister collections and methods. Scripts can load and unload additional script procedures.
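T-Script itself is not publicly documented, so here is a rough Python analogy of what registering statements and dynamic binding can mean: methods are attached and detached at runtime, reshaping behaviour on the fly:

    class Widget:
        pass

    def blink(self):
        return "widget blinks"

    Widget.blink = blink          # register a method dynamically
    w = Widget()
    print(w.blink())              # -> widget blinks

    del Widget.blink              # un-register it again
    print(hasattr(w, "blink"))    # -> False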

Variables, collections, and widgets are polymorphic. Variables can be scalars, universal resource identifiers (URIs), regular expressions, localized and internationalized dates and times, and Tsert.com template tags.

Collections can be stacks, lists, maps, sets, vectors, queues, records, protocols, trees, PDU trees, XML trees, SQL cursors, SQL databases, search databases, matrices, graphs, and extended graphs (semantic networks). Widgets can be any widget provided by a toolkit library, e.g. Qt, Java, and GTK.

We chose an object-based approach because we believe it is a simpler way to provide basic direct inheritance than the object-oriented one. The inheritance provided is dynamic and can be completely changed, which gives rise to the possibility of writing adaptive scripts.

Script-based applications are, by default and for security reasons, not granted direct access to any file system. Access is granted only through the SALT protocol and script signatures.


Article Abstract

TFS Guard©®: Access Control for the TFS File System


White-Papers Article

Pierre Innocent, Member, IEEE

The TFS [patent pending] file system, developed for our operating system, grants access to files by identifying the source of the request through a SALT handshake and a signature verification. Each application packaged for our OS must provide a signature for every agent application that may need direct access to files. We intend to show that our approach to access control is a more efficient way to secure files on a file system than the Security-Enhanced Linux (SELinux) one.

Article Abstract

SaltFS©®: a Crypting Interface for the TFS File System


White-Papers Article

Pierre Innocent, Member, IEEE


SaltFS©® [patent pending] is simply a device driver providing a SALT protocol-based crypting interface to the TFS file system. The SALT protocol, usually referred to as an N-factor authentication protocol, allows crypting based on many identifying attributes. Each user and each path can have its own set of SALT keys, which implies that any given file could be encrypted with any given encryption algorithm. The only weakness is the requirement to keep a copy of the current set of keys for an entire file system or drive secured, for example on a USB key.

Article Abstract


TFS©®: the Terabyte File System with Search-Like File Retrieval and Secure Logging


White-Papers Article

Pierre Innocent, Member, IEEE

The Terabyte File System (TFS©® [patent pending]) was developed to improve interaction with agents and applications. The structure of the file system is made of links and vertices/nodes. Each vertex or node is a file node, or inode, and the links are the paths to these vertices. It relies on a single-level inode storage structure with no folders, and on unique keys pointing directly to files.

The file system is made up of three distinct sections: vertex, index, and inode. What used to be folders are seen as vertices in a graph, since they are simply pointers to a location. Mount points can be mounted hidden, i.e. only a guard application can access files under the mount point, and every other application must issue a request using the SALT protocol. Indices are keys used for searches.

Agents can make search-like requests for files. The requests are based on path-spec [patent pending] semantics, where the search keywords are the words that constitute the file path. Just as in search requests to a web engine, path-spec search requests can be specified with Boolean logic operators such as OR, AND, and NOT.
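A minimal sketch of path-spec style matching, assuming a path is reduced to the set of words it contains (the actual TFS semantics, with full Boolean expressions, are richer):

    # Match stored paths against required and excluded keywords.
    def matches(path, required=(), excluded=()):
        words = set(path.strip("/").split("/"))
        return (all(w in words for w in required)
                and not any(w in words for w in excluded))

    paths = ["/projects/tfs/notes", "/projects/salt/keys"]
    # Equivalent of the query: projects AND tfs AND NOT keys
    print([p for p in paths
           if matches(p, required={"projects", "tfs"}, excluded={"keys"})])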

The TFS also includes a built-in notify feature, which agents can use when they require notifications of access events on certain files and links. The TFS also includes an extended attribute layer dealing with content, which allows a text-scanning agent to add content-based keywords to an inode.

The last layer is an access-control layer, called TFS-Guard©®, based on our SALT protocol and agent signatures. When a given path to a file is guarded, any agent requesting access to that file must provide a SALT key to be granted access; additionally, the agent's signature must match the one stored by the file system.

The TFS has a built-in access log, which can only be erased, not modified, and only when the system is booted in maintenance mode. When the logging feature is enabled, every request for guarded files is logged.

Article Abstract

Modeling Adaptive Behaviour Using Retrieval, Storage, and Strength of Memories


White-Papers Article

Pierre Innocent, Member, IEEE

The choice of anchoring an adaptive system on the concept of memories is linked to how a human brain functions. The human brain is a gigantic memory storage system, and every single action finds its trigger in the brain.
The method our adaptive system uses is simply to count on the memory storage mechanism to provide the variability necessary to allow the system to adjust to a new environment. Every time a particular memory is stored, the slight modifications which can accompany the storage of that memory, and of its associated relations, may trigger a different behaviour on the part of the system when the memory is retrieved anew.
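A toy sketch of this storage-driven variability (the strength values, jitter, and retrieval rule are all illustrative assumptions):

    import random

    store = {}

    def remember(key, strength=1.0):
        # Storing a memory slightly perturbs its strength.
        jitter = random.uniform(-0.1, 0.1)
        store[key] = store.get(key, 0.0) + strength + jitter

    def recall():
        # Retrieval favours the strongest memory, so the perturbations
        # can surface a different behaviour over time.
        return max(store, key=store.get)

    remember("flee"); remember("freeze"); remember("flee")
    print(recall())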


Article Abstract

Using Character Traits in Managing the Strength of Memories in a Memory-Driven Adaptive System


White-Papers Article

Pierre Innocent, Member, IEEE

The use of character traits, as in personality traits, in the implementation of a memory-driven adaptive system came out of an understanding of the basic model/schema of animal behaviour. The memory-driven store is essentially a knowledge base built as a directed weighted graph, and the criteria used in the storage of memories are character traits, as well as semantics, relevance, and frequency.


Article Abstract

The Command-Map Engine (SCM) of the Content-Enabled Breeze::OS Desktop©®


White-Papers Article

Pierre Innocent, Member, IEEE

The natural language features of the PI desktop are provided by our content engine [1]. It allows the use of natural language to issue commands to the desktop, by simply typing the request or command.

The content engine is used to parse the typed text and transform it into a computerese-based [2] command map. The semantic command map [SCM, patent pending] comprises the set of actions, actors, objects, and their attributes, collected from the semantic content extracted from the user-entered text.

The command map is then used to generate a set of specific desktop-related actions, in order to perform the specified user command. Voice commands can be parsed into text and then fed to the command-map engine.
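A hypothetical command map for the request "move the report to the archive folder" might look as follows (the SCM structure is patent pending and unpublished, so every field name here is an assumption):

    scm = {
        "action": "move",
        "actor": "desktop",
        "objects": [{"name": "report", "type": "document"}],
        "attributes": {"destination": "archive"},
    }

    # A dispatcher could then translate the map into desktop actions.
    def dispatch(command):
        if command["action"] == "move":
            source = command["objects"][0]["name"]
            target = command["attributes"]["destination"]
            return f"mv {source} {target}"   # illustrative shell equivalent

    print(dispatch(scm))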

We intend to show that our command-map based approach can be used to develop a completely generic command-response engine, which can be embedded into any system, and can also be used to develop a natural language based script [t-escript, patent pending].


Article Abstract

UTE©®: a Tag-Based Template Engine


White-Papers Article

Pierre Innocent, Member, IEEE


UTE©®, a template engine, is simply a converter that uses embedded tags to expand a given document template into a fully rendered version of the document. Our in-kernel HTTP daemon [1] relies on the template engine to output text in HTML, or any other notation, back to a client agent.

The tags are referred to as Tsert.com tags and are derived from URIs; they therefore have a built-in recursive nature. There are several types of tags: URI, ACTION, BLOCK, TEXT, TAG, and CUSTOM. They all include a conditional structure based on retrieved key/value pairs. There are several types of ACTION tags: GET, FRAGMENT, SELECT, SELECT_TAG, RETR, and RETR_TAG. There is also a set of reserved tags, such as locale, username, password, country, title, description, date, and time.
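A minimal sketch of tag expansion (the "<tsert:KEY/>" syntax is a made-up stand-in; the real Tsert.com tag grammar, with its conditional structure, is not published):

    import re

    def expand(template, values):
        # Replace each tag with the retrieved value for its key.
        return re.sub(r"<tsert:(\w+)/>",
                      lambda m: values.get(m.group(1), ""), template)

    page = "<h1><tsert:title/></h1><p>Hello, <tsert:username/>!</p>"
    print(expand(page, {"title": "Welcome", "username": "alice"}))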

The template engine can easily be embedded into a script to provide some of the website-building features of the PHP language. In our script [2], templates are seen as protocols that an opened channel uses to respond to a given client. The advantage of combining our script with the template engine is the complete separation of text and code: the text part is the text constituting an HTML page of a website, and the code part is what is usually read and executed by a script interpreter.


Article Abstract

ENET©®: a Content-Enabled Semantic Network Toolkit


White-Papers Article

Pierre Innocent, Member, IEEE

We intend to show that content-based semantic networks, with the proper graph traversal routines, can be just as efficient and accurate as inference rule engines at delivering information.

Building the semantic network with content as its basis facilitates interaction with agents needing content-related information, such as search or translation engines. Adding a layer that extracts inter-relationships between a given set of vertices gives rise to concept-based information retrieval. A given concept can be extracted from a set of path overlays [1] by examining the links between the vertices.
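A small sketch of concept retrieval by graph traversal (the toy adjacency-list network below only approximates the path-overlay method):

    from collections import deque

    network = {"stone": ["heavy", "solid"],
               "bird": ["fly", "light"],
               "fly": ["air", "wings"]}

    def related(start, depth=2):
        # Breadth-first traversal collects concepts reachable from start.
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            node, d = queue.popleft()
            if d == depth:
                continue
            for neighbour in network.get(node, []):
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append((neighbour, d + 1))
        return seen - {start}

    print(related("bird"))   # concepts reachable from "bird"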

The challenge in using this approach is to see whether we can get a content-enabled natural language processing engine to understand questions such as: why can a stone fly?

Relying on graph traversal totally eliminates the weakness of an overly recursive inference rule engine, and relationships between vertices are more easily extracted.

Our natural language processing (NLP) engine can be used to easily build semantic networks, by teaching it how to read dictionaries and thesauri. Since it can read and understand unstructured text, our NLP engine can also build social networks using our semantic network toolkit.

Article Abstract

Using Stenography to Increase Writing Speed with Stylus-Based Tablets on the Breeze::OS Desktop


White-Papers Article

Pierre Innocent, Member, IEEE

The speed and accuracy of professional typists cannot be matched by any person using a stylus or pen. Most computer writing interfaces settle on a keyboard as the preferred mechanism, and on typing as the mode of interaction. In modern writing-interface implementations, the keyboard has become virtual.

Our approach [patent pending] was developed on the belief that the majority of people are more comfortable writing than typing. Our system uses our content engine (see the NLP paper) for word and punctuation sequence matching and storage, plus a stenography subsystem added to the character recognition engine.

Stenography-based rules are added to the content engine rule set to allow mixing stenographic shorthands with actual text, without impeding the normal functioning of the natural language based interface of our Desktop. The stenography subsystem and associated software allow users to add their own set of shorthands, on top of the predefined ones already included in the content engine.

We believe that our approach will satisfy both average users and professional typists who use computer tablets running our Breeze::OS Desktop.

Article Abstract

Mapping Path-Spec Queries to SQL

White-Papers Article

Pierre Innocent, Member, IEEE

We present, in this paper, a novel way of issuing database queries, one that is less verbose and more amenable to manipulation by an evaluation engine. The approach is based on our Path-Spec [patent pending] methodology for the retrieval of files stored on our file system.

Our approach, also [patent pending], was developed so that a user could specify a database query directly in the URL text-entry widget of any browser, and to allow the full implementation of a database using our TFS Terabyte file system as the storage engine. The approach is, like the Path-Spec methodology itself, based on Boolean expressions, but also includes additional features specific to database queries. One such feature is the default mapping of path-spec keywords to the ordered set of table, row, column, and value. Each keyword corresponding to a table, row, or column can be replaced by the wild-card character '*'. The value keyword can be any numeric or string literal, or a regular expression.
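A minimal sketch of the table/row/column/value mapping described above (the SQL rendering choices, such as treating the row segment as a primary key, are assumptions; the real grammar also handles Boolean operators and regular expressions):

    def path_spec_to_sql(spec):
        table, row, column, value = spec.strip("/").split("/")
        sql = "SELECT {} FROM {}".format("*" if column == "*" else column,
                                         table)
        clauses = []
        if row != "*":
            clauses.append(f"id = '{row}'")        # row segment as key
        if value != "*":
            clauses.append(f"{column} = '{value}'")
        if clauses:
            sql += " WHERE " + " AND ".join(clauses)
        return sql

    print(path_spec_to_sql("/users/*/country/Canada"))
    # SELECT country FROM users WHERE country = 'Canada'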

We believe that our approach will produce database clients that are easier to build and use.


Article Abstract

Salted Streaming Protocol (SSP)

White-Papers Article

Pierre Innocent, Member, IEEE

We present, in this paper, a new streaming protocol based on the Hypertext Transfer Protocol (HTTP). The new protocol is called SSP, for Salted Streaming Protocol [patent pending]. Like other streaming protocols, it relies on a control port and a data port, where the communication protocol on the data port can be either SSP or RTP.
The transport protocol for both the control and data ports can be either TCP or UDP. The SSP protocol uses only HTTP headers for both control and data communication, with the 'X' prefix for non-generic headers. It allows requests for re-transmission of data packets using the GET method, with a byte or time range specified in the 'Content-range' header. Some of the SSP headers are X-Session-Id, X-Source-Id, X-Conference, X-Blocksize, X-Bandwidth, X-Audience, Content-duration, Content-title, Content-encryption, Content-collection, Document-type, and Transport.
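A hypothetical SSP retransmission request assembled from the headers listed above (the request line, host, and values are illustrative; the exact framing is not published):

    # Ask the sender to re-transmit a byte range of the stream.
    request = (
        "GET /stream/42 HTTP/1.1\r\n"
        "Host: media.example.org\r\n"
        "X-Session-Id: 8f31\r\n"
        "X-Blocksize: 1400\r\n"
        "Content-range: bytes=96000-128000\r\n"
        "\r\n"
    )
    print(request)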

The SSP protocol requires that transmission of data packets be done using chunked encoding, and allows for encryption using the SALT protocol. With the SALT protocol, every packet transmitted can be encrypted with a different key and algorithm.

We believe that our approach for an HTTP-based encrypted streaming protocol is simpler to implement and use.

Article Abstract

Salted Club Messaging Protocol (SCMP)

White-Papers Article

Pierre Innocent, Member, IEEE

We present, in this paper, a new messaging protocol based on the Hypertext Transfer Protocol (HTTP). The new protocol is called SCMP, for Salted Club Messaging Protocol [patent pending]. Like other messaging protocols, it relies on a control port and a data port.

The transport protocol for both the control and data ports can be either TCP or UDP. The SCMP protocol uses only HTTP headers for both control and data communication, with the 'X' prefix for non-generic headers. The SCMP protocol shares some of the same headers as the SSP protocol, such as X-Session-Id, X-Source-Id, X-Conference, X-Blocksize, X-Bandwidth, X-Audience, Content-duration, Content-title, Content-encryption, Content-collection, Document-type, and Transport.

The SCMP protocol allows for encryption using the SALT protocol.

We believe that our approach for an HTTP-based encrypted messaging protocol is simpler to implement and use.

