
After 30 years in industry and science, working for a sole proprietorship, a government agency, a consulting group, a multi-national software corporation, an academic department at a major university, an international industrial research lab, and a healthcare startup, I want to try something different: a role that is more flexible, more resilient to change, and that allows me to fully utilize the interdisciplinary skills and experience I have gained working in industry and academia.

 

The section below describes my consulting services across four functional areas, and the following section enumerates the research topics I hope to pursue over the coming year.

 

Consulting Services

 

I created the Burton Praxis to provide consulting services as an individual consultant and to collaborate with others (individuals, companies, professional organizations, etc.) on strategic alliances and time-bound projects. My professional and academic experience spans four functional areas in which I hold either a degree or a certification: software engineering, program management, data science, and platform engineering.

 

However, the degrees and certifications don't really speak to the personal truth of my experience. Prior to my deep dive into technology, I was a failed philosophy/literary criticism student who spent the ages of 14 to 25 working in bars, restaurants, and nightclubs. I started my career in technology as an autodidact without a degree and didn't finish my undergrad until two years into my role at Microsoft. I was extremely lucky to have gotten the opportunities I did. Many kind people stuck their necks out for me, and without their help I wouldn't have had the varied and rich work experiences that I have had.

 

Research

 

In addition to consulting services, I am hoping to create a small distributed research lab focused on exploring specific projects in three broad areas:

 

I. Re-imagining Abstraction in the Large:

 

Coming out of the LLVM compiler world, Multi-Level Intermediate Representation (MLIR) provides a way of describing the complex types of data, the levels of compositionality, and the interdependence of hardware and software that make up present-day distributed, high-performance computing applications. For me, this places the attention where it should be: overcoming impedance mismatches between representations and implementations. A significant amount of my professional career has been focused on this area, starting with a project in 1994 to model documents as DAGs in SGML and to bridge new markup-aware technologies with legacy relational storage, management infrastructure, and mainstream desktop publishing tools. Over the last three decades I've tackled innumerable representational mismatches in multiple domains, including normalizing metadata, taxonomy, and ontology representations into formal description logic; translating constituency and dependency structures into adjacency matrices; composing relational-to-object mappings for complex persistent objects in object-oriented languages; deconstructing corpus documents into vector representations; extracting headed phrases to transform into semantic relations; parsing medical records with custom schemes for search indexing and relations; and many others.
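
As a small, purely illustrative sketch of one such mismatch, the Python fragment below turns a tiny dependency parse, expressed as head-dependent edges, into an adjacency matrix. The sentence, edge list, and function name are my own assumptions for the example, not taken from any of the projects above.

```python
# A minimal sketch of one representational mismatch: turning a dependency
# parse (a list of head -> dependent edges over tokens) into an adjacency
# matrix. The sentence and edges here are illustrative only.
import numpy as np

tokens = ["the", "scan", "shows", "no", "acute", "fracture"]

# (head_index, dependent_index) pairs; the root ("shows") has no incoming edge.
edges = [(1, 0), (2, 1), (5, 3), (5, 4), (2, 5)]

def to_adjacency(n_tokens, edges):
    """Build a directed adjacency matrix where A[head, dep] = 1 for each edge."""
    A = np.zeros((n_tokens, n_tokens), dtype=int)
    for head, dep in edges:
        A[head, dep] = 1
    return A

A = to_adjacency(len(tokens), edges)
print(A)
```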

 

II. Artificial Intelligence in the Small: Local Models, Humans-in-the-Loop

 

I completed my PhD in 2016, as the software industry, the scientific-industrial complex, and academia were dealing with the emergence of deep learning. Although many would consider this bad timing, I feel I benefited immensely from a more traditional approach to the study of natural language processing and statistical machine learning, and I was fortunate to be exposed to multiple perspectives on human language and computation, including rule-based, statistical machine learning, and deep learning approaches. My experience trying to build small worlds of connected tuples of relations in medical text via TF-IDF, a domain-specific vocabulary for radiology impressions, syntactic features, and semantic representations was humbling and helped me understand the limits of automated language technologies. That spirit of humility guides my current research. Although I will use LLMs as a source of distributional information about language, I have chosen not to work with or incorporate any generative AI technologies into my projects. I prefer to focus on small, transparent models, annotation, humans-in-the-loop, mixed approaches, and realistic goals.
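
To make "small and transparent" concrete, here is a minimal sketch of the kind of workflow I have in mind: TF-IDF vectors over a handful of invented, impression-like sentences, with candidate pairs surfaced for a human annotator rather than accepted automatically. The sentences, the similarity threshold, and the use of scikit-learn are illustrative assumptions, not a description of any past or planned system.

```python
# A minimal sketch of a small, transparent, human-in-the-loop workflow:
# TF-IDF vectors over a few invented impression-like sentences, with
# candidate matches surfaced for human review rather than auto-accepted.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

impressions = [
    "no acute fracture or dislocation",
    "acute fracture of the distal radius",
    "no evidence of pneumonia",
]

vectorizer = TfidfVectorizer()          # the "model" is just inspectable term weights
X = vectorizer.fit_transform(impressions)
sims = cosine_similarity(X)

# Surface candidate pairs above an (illustrative) threshold for annotation.
THRESHOLD = 0.2
for i in range(len(impressions)):
    for j in range(i + 1, len(impressions)):
        if sims[i, j] > THRESHOLD:
            print(f"Review pair ({i}, {j}): similarity {sims[i, j]:.2f}")
```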

 

III. HPC in the Small: On-Premise/Local High-Performance Compute

 

I think local compute is important for both organizations and individuals and provides a bulwark against industry capture in the cloud computing space. Container and orchestration technologies such as Docker and Kubernetes, and virtualization platforms such as KVM, Xen, and Hyper-V, have been used in the cloud to deliver applications at scale. I think these technologies have their place on-premise as well. In combination with the heterogeneous designs enabled by MLIR and LLVM and the availability of IoT devices and alternative micro-cluster hardware such as the Turing Pi, sophisticated small-scale compute environments can be built on-premise with cloud technologies. By connecting these local compute resources with tools such as Proxmox and XCP-ng, we can empower communities and regional organizations to work together to build models and applications independent of the current cloud ecosystem yet capable of using the same standards, protocols, and application software.
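
As a very small illustration of the "same tooling, run locally" point, the sketch below uses the Docker SDK for Python to start a service on a local machine exactly as one might against a cloud host. It assumes a local Docker daemon and the docker Python package; the image, container name, and port are illustrative, and a real micro-cluster would sit behind an orchestrator rather than a single daemon.

```python
# A minimal sketch of using cloud-style container tooling on-premise:
# the Docker SDK for Python starting a small local service.
# Assumes a local Docker daemon; image, name, and port are illustrative.
import docker

client = docker.from_env()

container = client.containers.run(
    "nginx:alpine",              # any small, self-contained service image
    name="local-demo-service",   # illustrative name
    ports={"80/tcp": 8080},      # expose on the local host
    detach=True,
)

print(container.name, container.status)
# Later, the same API can stop and remove the service:
# container.stop(); container.remove()
```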

 

Please contact me directly for my availability or just to chat!

 

Sincerely,

Prescott Klassen, Ph.D.
prescott.klassen@burtonpraxis.com

Noli liberum prandium comedere (Do not eat the free lunch)
