Welcome to the Rebel Labs project space!
Below are some applied research problems I'm thinking about. I've tried to cast each as a "deliverable" to give it a well-delineated scope, mission, and real-world use case; for more on why I think this is a good idea, see my essay.
If you want to know more, or get involved, email me at rebel@heptar.ch.
Existing quantum programming languages do not use natural, high-level algorithmic representations, so they can't facilitate natural, high-level algorithmic reasoning. The goal here would be new algorithmic primitives (permitting, say, interoperability between different computational frameworks) and the corresponding language design.
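As a sketch of what a natural primitive might look like, here is a toy amplitude-amplification construct in Python. Everything here is hypothetical: the `amplify` name is invented, and a NumPy statevector simulation stands in for the gate-level compilation a real language implementation would perform. The point is that the programmer states *which* basis states to boost, and the oracle/diffusion circuitry becomes the compiler's problem:

```python
# Hypothetical sketch: "amplify" as a language-level primitive for
# amplitude amplification. A statevector simulation stands in for the
# gate-level compilation a real implementation would perform.
import numpy as np

def amplify(predicate, n_qubits, rounds):
    """Boost the amplitude of the basis states satisfying `predicate`."""
    dim = 2 ** n_qubits
    state = np.full(dim, 1 / np.sqrt(dim))   # start in uniform superposition
    marked = np.array([predicate(i) for i in range(dim)])
    for _ in range(rounds):
        state[marked] *= -1                  # oracle: phase-flip marked states
        state = 2 * state.mean() - state     # diffusion: inversion about the mean
    return state

# The algorithm reads at the level it is reasoned about:
# "amplify the state with index 0b101", not a list of gates.
final = amplify(lambda i: i == 0b101, n_qubits=3, rounds=2)
print(np.round(final**2, 3))   # ~0.945 of the probability mass sits on index 5
```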
Details coming soon.
Decentralized money is a cool idea in theory but tends to be slow, expensive, and effectively centralized in practice. The goal of this project is to explore the scalability, security, and policy implications of untrusted centralization, i.e., when you don't trust the bank. Can the bank convince you, mathematically, that it's doing what you asked it to do?
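As a flavor of what "convincing you mathematically" can mean, here is a minimal sketch of one standard building block, a Merkle inclusion proof (the mechanism behind transparency logs like Certificate Transparency). The bank publishes a single root hash committing to its whole ledger, then hands any client a logarithmic-size proof that their transaction is in it. One hedge: inclusion proofs show your transaction was recorded, not that the ledger as a whole is consistent; the latter is where heavier tools like zero-knowledge proofs would come in.

```python
# Sketch of a Merkle inclusion proof: the untrusted bank commits to its
# ledger with one root hash and proves a given transaction is included.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    if len(level) % 2:                   # duplicate the last node on odd levels
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)], level

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level, _ = _next_level(level)
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes the bank sends alongside its published root."""
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        next_lv, padded = _next_level(level)
        proof.append((padded[index ^ 1], index % 2 == 0))  # (sibling, am-I-left?)
        level, index = next_lv, index // 2
    return proof

def verify(leaf, proof, root):
    """Client-side check: O(log n) hashes against the committed root."""
    node = h(leaf)
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

ledger = [b"alice->bob:10", b"bob->carol:3", b"carol->alice:7"]
root = merkle_root(ledger)                   # the bank publishes this
proof = inclusion_proof(ledger, 1)           # ...and hands this to the client
print(verify(b"bob->carol:3", proof, root))  # True: the transaction is in there
```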
Details coming soon.
Watermarking the output of large language models is a problem of applied cryptography. This project aims to explore techniques for "backmarking" (implanting watermarks during training) and "frontmarking" (layering watermarks over the output of a trained model). Optimistically, the outcome would be marking pipelines for LLMs and diffusion models.
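For the frontmarking direction, here is a toy sketch of one published scheme, the "green list" watermark of Kirchenbauer et al. (2023). At each decoding step the previous token, together with a secret key, pseudorandomly splits the vocabulary in half; the green half gets a logit bias before sampling, and a detector who knows only the key counts green tokens and computes a z-score. The vocabulary size, key, bias, and random-logit "model" below are all stand-ins for a real LLM's output distribution:

```python
# Toy sketch of the Kirchenbauer et al. (2023) "green list" watermark.
# VOCAB, KEY, DELTA and the random-logit "model" are all stand-ins.
import hashlib, math, random

VOCAB, KEY, DELTA = 1000, b"secret-key", 4.0

def green_list(prev_token: int) -> set:
    """Keyed pseudorandom half of the vocabulary, seeded by the previous token."""
    seed = hashlib.sha256(KEY + prev_token.to_bytes(4, "big")).digest()
    ids = list(range(VOCAB))
    random.Random(seed).shuffle(ids)
    return set(ids[: VOCAB // 2])

def sample_watermarked(n_tokens: int) -> list:
    rng, tokens = random.Random(0), [0]
    for _ in range(n_tokens):
        greens = green_list(tokens[-1])
        weights = [math.exp(rng.gauss(0, 1) + (DELTA if t in greens else 0.0))
                   for t in range(VOCAB)]         # biased softmax sampling
        tokens.append(rng.choices(range(VOCAB), weights=weights)[0])
    return tokens

def detect(tokens: list) -> float:
    """z-score of the green-token count; needs only the key, not the model."""
    n = len(tokens) - 1
    hits = sum(tokens[i + 1] in green_list(tokens[i]) for i in range(n))
    return (hits - n / 2) / math.sqrt(n / 4)

print(detect(sample_watermarked(200)))                         # large (watermarked)
print(detect([random.randrange(VOCAB) for _ in range(201)]))   # ~0 (unmarked)
```

Backmarking has no such clean published recipe yet, which is part of what makes it interesting.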
Details coming soon.