88 changed files with 10382 additions and 862 deletions
@@ -0,0 +1,119 @@

# FROST P2TR wallet development details

**NOTE**: the minimal Python version is 3.12.

## FrostWallet storages

`FrostWallet` has two additional storages in addition to the wallet `Storage`:

- `DKGStorage` with DKG data
- `DKGRecoveryStorage` with DKG recovery data (unencrypted)

They are loaded only for DKG/FROST support and are not loaded during usual
wallet usage.

Usual wallet usage interacts with the FROST/DKG functionality via the IPC code
in `frost_ipc.py` (currently an `AF_UNIX` socket, for simplicity).
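The wallet-to-servefrost IPC pattern can be sketched with plain asyncio: one newline-terminated message per line over an `AF_UNIX` socket. This is a simplified illustration only; the real `frost_ipc.py` pickles and ECIES-encrypts each message, and the JSON framing and field names here are assumptions:

```python
# Minimal sketch of newline-framed request/response IPC over AF_UNIX.
# JSON framing and the 'cmd'/'echo' fields are illustrative assumptions;
# the real code sends ECIES-encrypted pickled dicts, not plain JSON.
import asyncio
import json
import os
import tempfile


async def handle(reader, writer):
    line = await reader.readline()            # one message per line
    req = json.loads(line)
    resp = {"echo": req["cmd"]}
    writer.write((json.dumps(resp) + "\n").encode())
    await writer.drain()
    writer.close()


async def demo():
    sock = os.path.join(tempfile.mkdtemp(), "ipc.sock")
    server = await asyncio.start_unix_server(handle, sock)
    reader, writer = await asyncio.open_unix_connection(sock)
    writer.write((json.dumps({"cmd": "ping"}) + "\n").encode())
    await writer.drain()
    resp = json.loads(await reader.readline())
    writer.close()
    server.close()
    return resp


print(asyncio.run(demo()))  # {'echo': 'ping'}
```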

## Structure of DKG data in the DKGStorage

```
"dkg": {
    "sessions": {
        "md_type_idx": session_id,
        ...
    },
    "pubkey": {
        "session_id": threshold_pubkey,
        ...
    },
    "pubshares": {
        "session_id": [pubshare1, pubshare2, ...],
        ...
    },
    "secshare": {
        "session_id": secshare,
        ...
    },
    "hostpubkeys": {
        "session_id": [hostpubkey1, hostpubkey2, ...],
        ...
    },
    "t": {
        "session_id": t,
        ...
    }
}
```

Here `md_type_idx` is a byte serialization of the `mixdepth`, `address_type`
and `index` of the pubkey, as in the HD wallet derivations.
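For illustration, such a composite key could be packed from fixed-width big-endian fields. The exact byte layout used by the wallet code is not shown here, so treat this as a hypothetical sketch:

```python
# Hypothetical sketch of building an "md_type_idx" style key from
# (mixdepth, address_type, index). The real serialization in the wallet
# code may use a different layout.
import struct


def md_type_idx(mixdepth: int, address_type: int, index: int) -> bytes:
    # fixed-width big-endian fields keep the keys unambiguous and sortable
    return struct.pack(">III", mixdepth, address_type, index)


key = md_type_idx(0, 1, 5)
assert len(key) == 12
```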

## Overall information

The code uses Twisted's `asyncioreactor` in place of the standard Twisted
reactor. Initialization is done as early as possible in `jmclient/__init__.py`.

Wallet classes `TaprootWallet` and `FrostWallet` are in `jmclient/wallet.py`,
as is the utility class `DKGManager`.
Engine classes `BTC_P2TR(BTCEngine)` and `BTC_P2TR_FROST(BTC_P2TR)` are in
`jmclient/cryptoengine.py`.

## `scripts/wallet-tool.py` commands

- `hostpubkey`: display the host public key
- `servefrost`: run only as DKG/FROST support (a separate process which needs
  to run permanently)
- `dkgrecover`: recover DKG sessions from the DKG recovery file
- `dkgls`: display FrostWallet DKG data
- `dkgrm`: remove FrostWallet DKG data by `session_id` list
- `recdkgls`: display DKG recovery file data
- `recdkgrm`: remove DKG recovery file data by `session_id` list
- `testfrost`: run only as a test of FROST signing

## Description of `jmclient/frost_clients.py`

- `class DKGClient`: client which supports only DKG sessions over JM channels.
  Uses the `chilldkg` reference code from
  https://github.com/BlockstreamResearch/bip-frost-dkg/, placed in the
  `jmfrost/chilldkg_ref` package.

  Uses the channel-level commands `dkginit`, `dkgpmsg1`, `dkgcmsg1`,
  `dkgpmsg2`, `dkgcmsg2`, `dkgfinalized` added to `jmdaemon/protocol.py`.

  NOTE: `dkgfinalized` is used to ensure that all DKG parties saw `dkgcmsg2`
  and saved the DKG data to wallet/recovery data.

  Commands in `jmbase/commands.py`: `JMDKGInit`, `JMDKGPMsg1`, `JMDKGCMsg1`,
  `JMDKGPMsg2`, `JMDKGCMsg2`, `JMDKGFinalized`, `JMDKGInitSeen`,
  `JMDKGPMsg1Seen`, `JMDKGCMsg1Seen`, `JMDKGPMsg2Seen`, `JMDKGCMsg2Seen`,
  `JMDKGFinalizedSeen`.

  Responders for these commands are in `jmclient/client_protocol.py` and
  `jmdaemon/daemon_protocol.py`.

  In DKG sessions, the party which needs a new pubkey is called the
  Coordinator.

- `class FROSTClient(DKGClient)`: client which supports DKG/FROST sessions
  over JM channels. Uses the reference FROST code from
  https://github.com/siv2r/bip-frost-signing/, placed in the
  `jmfrost/frost_ref` package.

  Uses the channel-level commands `frostinit`, `frostround1`, `frostround2`,
  `frostagg1` added to `jmdaemon/protocol.py`.

  Commands in `jmbase/commands.py`: `JMFROSTInit`, `JMFROSTRound1`,
  `JMFROSTAgg1`, `JMFROSTRound2`, `JMFROSTInitSeen`, `JMFROSTRound1Seen`,
  `JMFROSTAgg1Seen`, `JMFROSTRound2Seen`.

  Responders for these commands are in `jmclient/client_protocol.py` and
  `jmdaemon/daemon_protocol.py`.

  In FROST sessions, the party which needs a new signature is called the
  Coordinator.

## Recovery storage, recovery data file

ChillDKG recovery data is placed in an unencrypted recovery file with the
name `wallet.jmdat.dkg_recovery`. The code of `class DKGRecoveryStorage` is
in `jmclient/storage.py`.

## Utility scripts

The current changes to the code allow the creation of unencrypted wallets if
an empty password is used.

- `scripts/bdecode.py`: decodes wallet/recovery data files to stdout.
- `scripts/bencode.py`: encodes a text file to bencode format. Separate
  options are provided to encode with the DKG data file magic or the DKG
  recovery data file magic.
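For reference, the bencode encoding these files contain (after the 8-byte magic prefix) can be sketched with a minimal encoder. The scripts themselves rely on the third-party `bencoder` package; this stand-alone version only illustrates the format:

```python
# Minimal bencode encoder: integers as i<n>e, byte strings as <len>:<bytes>,
# lists as l...e, dicts as d...e with keys sorted bytewise.
def bencode(obj) -> bytes:
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        items = sorted(obj.items())  # bencode requires sorted keys
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(f"unsupported type: {type(obj)}")


assert bencode({b"t": 2}) == b"d1:ti2ee"
```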

@@ -0,0 +1,68 @@

# FROST P2TR wallet usage

**NOTE**: the minimal Python version is 3.12.

To use the FROST P2TR wallet you need the following (example for 2-of-3 FROST
signing):

1. Add `txindex=1` to `bitcoin.conf`. This option is needed to get non-wallet
transactions with `getrawtransaction`, which is needed to perform signing of
P2TR inputs.

2. Set `frost = true` in the `POLICY` section of `joinmarket.cfg`:
```
[POLICY]
...
# Use FROST P2TR SegWit wallet
frost = true
```

3. Create a bitcoind watch-only descriptors wallet:
```
bitcoin-cli createwallet "wallet_name" true true
```
where `true true` stands for:
> `disable_private_keys`
> Disable the possibility of private keys
> (only watchonlys are possible in this mode).

> `blank`
> Create a blank wallet. A blank wallet has no keys or HD seed.

4. Get the `hostpubkey` for the wallet by running:
```
scripts/wallet-tool.py wallet.jmdat hostpubkey
...
021e99d8193b95da10f514556e98882bc2cebfd0ee0711fa71006cbc9e9a135b43
```

5. Repeat steps 1-4 for the other FROST group wallets.

6. Gather the hostpubkeys from step 4 and place them in the `FROST` section
of `joinmarket.cfg` as the `hostpubkeys` value, separated by `,`.

7. Add the `t` (threshold) value to the `FROST` section of `joinmarket.cfg`:
```
[FROST]
hostpubkeys = 021e99d8193b95da...,03a2f4ce928da0f5...,02a1e2ee50187f3e...
t = 2
```

8. Run permanent FROST processes with the `servefrost` command on `wallet1`,
`wallet2` and `wallet3`:
```
scripts/wallet-tool.py wallet.jmdat servefrost
```

9. Run the `display` command on `wallet1`:
```
scripts/wallet-tool.py wallet.jmdat display
```
DKG sessions will start in order to generate pubkeys for the wallet
addresses. This can take several minutes.

10. Repeat step 9 to generate pubkeys for `wallet2` and `wallet3`.

11. Test FROST signing with the `testfrost` command:
```
scripts/wallet-tool.py wallet.jmdat testfrost
```

@@ -0,0 +1,27 @@

# Taproot P2TR wallet usage

To use the P2TR wallet you need the following:

1. Add `txindex=1` to `bitcoin.conf`. This option is needed to get non-wallet
transactions with `getrawtransaction`, which is needed to perform signing of
P2TR inputs.

2. Set `taproot = true` in the `POLICY` section of `joinmarket.cfg`:
```
[POLICY]
...
# Use Taproot P2TR SegWit wallet
taproot = true
```

3. Create a bitcoind watch-only descriptors wallet:
```
bitcoin-cli createwallet "wallet_name" true true
```
where `true true` stands for:
> `disable_private_keys`
> Disable the possibility of private keys
> (only watchonlys are possible in this mode).

> `blank`
> Create a blank wallet. A blank wallet has no keys or HD seed.

@@ -0,0 +1,56 @@

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import bencoder
import click
import json
from pprint import pprint


def debyte_list(lst):
    res = []
    for item in lst:
        if isinstance(item, bytes):
            item = item.decode('ISO-8859-1')
        elif isinstance(item, list):
            item = debyte_list(item)
        res.append(item)
    return res


def debyte_dict(d):
    res = {}
    for k, v in d.items():
        if isinstance(k, bytes):
            k = k.decode('ISO-8859-1')
        if isinstance(v, dict):
            v = debyte_dict(v)
        elif isinstance(v, bytes):
            v = v.decode('ISO-8859-1')
        elif isinstance(v, list):
            v = debyte_list(v)
        res[k] = v
    return res


CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])
@click.command(context_settings=CONTEXT_SETTINGS)
@click.option('-i', '--input-file', required=True,
              help='Input file')
@click.option('-n', '--no-decode', is_flag=True, default=False,
              help='Do not decode to ISO-8859-1')
def main(**kwargs):
    input_file = kwargs.pop('input_file')
    no_decode = kwargs.pop('no_decode')
    with open(input_file, 'rb') as fd:
        data = fd.read()
    if no_decode:
        d = bencoder.bdecode(data[8:])  # skip the 8-byte magic prefix
        pprint(d)
    else:
        d = debyte_dict(bencoder.bdecode(data[8:]))
        print(json.dumps(d, indent=4))


if __name__ == '__main__':
    main()

@@ -0,0 +1,69 @@

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import bencoder
import click
import json


def enbyte_list(lst):
    res = []
    for item in lst:
        if isinstance(item, str):
            item = item.encode('ISO-8859-1')
        elif isinstance(item, list):
            item = enbyte_list(item)
        res.append(item)
    return res


def enbyte_dict(d):
    res = {}
    for k, v in d.items():
        if isinstance(k, str):
            k = k.encode('ISO-8859-1')
        if isinstance(v, dict):
            v = enbyte_dict(v)
        elif isinstance(v, str):
            v = v.encode('ISO-8859-1')
        elif isinstance(v, list):
            v = enbyte_list(v)
        res[k] = v
    return res


CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])
@click.command(context_settings=CONTEXT_SETTINGS)
@click.option('-i', '--input-file', required=True,
              help='Unencoded file')
@click.option('-o', '--output-file', required=True,
              help='Output file')
@click.option('-d', '--dkg-magic', is_flag=True, default=False,
              help='Prepend dkg storage magic')
@click.option('-r', '--recovery-magic', is_flag=True, default=False,
              help='Prepend recovery storage magic')
def main(**kwargs):
    input_file = kwargs.pop('input_file')
    output_file = kwargs.pop('output_file')
    dkg_magic = kwargs.pop('dkg_magic')
    recovery_magic = kwargs.pop('recovery_magic')
    if dkg_magic and recovery_magic:
        raise click.UsageError('Options -d and -r are mutually exclusive')
    if dkg_magic:
        MAGIC_UNENC = b'JMDKGDAT'
    elif recovery_magic:
        MAGIC_UNENC = b'JMDKGREC'
    else:
        MAGIC_UNENC = b'JMWALLET'

    with open(input_file, 'r') as fd:
        data = json.loads(fd.read())
    data = enbyte_dict(data)

    with open(output_file, 'wb') as wfd:
        wfd.write(MAGIC_UNENC + bencoder.bencode(data))


if __name__ == '__main__':
    main()

@@ -1,6 +1,41 @@

#!/usr/bin/env python3

import asyncio
import sys

import jmclient  # install asyncioreactor
from twisted.internet import reactor

from jmbase import jmprint
from jmclient import wallet_tool_main


async def _main():
    try:
        res = await wallet_tool_main("wallets")
        if res:
            jmprint(res, "success")
        else:
            jmprint("Finished", "success")
    except SystemExit as e:
        return e.args[0] if e.args else None
    finally:
        for task in asyncio.all_tasks():
            task.cancel()
        if reactor.running:
            reactor.stop()


if __name__ == "__main__":
    asyncio_loop = asyncio.get_event_loop()
    main_task = asyncio_loop.create_task(_main())
    reactor.run()
    if main_task.done():
        try:
            exit_status = main_task.result()
            if exit_status:
                sys.exit(exit_status)
        except asyncio.CancelledError:
            pass

@@ -1,11 +1,23 @@

#!/usr/bin/env python3

import asyncio

import jmclient  # install asyncioreactor
from twisted.internet import reactor

from jmbase import jmprint
from jmclient import YieldGeneratorBasic, ygmain

# YIELD GENERATOR SETTINGS ARE NOW IN YOUR joinmarket.cfg CONFIG FILE
# (You can also use command line flags; see --help for this script).


async def _main():
    await ygmain(YieldGeneratorBasic, nickserv_password='')
    jmprint('done', "success")


if __name__ == "__main__":
    asyncio_loop = asyncio.get_event_loop()
    asyncio_loop.create_task(_main())
    reactor.run()

@@ -0,0 +1,244 @@

# -*- coding: utf-8 -*-

import asyncio
import pickle

import jmbitcoin as btc
from jmbase.support import jmprint, EXIT_FAILURE, twisted_sys_exit, get_log


jlog = get_log()


class IPCBase:

    def encrypt_msg(self, msg_dict):
        msg_bytes = pickle.dumps(msg_dict)
        return btc.ecies_encrypt(msg_bytes, self.pubkey) + b'\n'

    def decrypt_msg(self, enc_bytes):
        msg_bytes = btc.ecies_decrypt(self.wallet._hostseckey, enc_bytes)
        return pickle.loads(msg_bytes)


class FrostIPCServer(IPCBase):

    def __init__(self, wallet):
        self.loop = asyncio.get_event_loop()
        self.wallet = wallet
        self.pubkey = btc.privkey_to_pubkey(wallet._hostseckey)
        self.sock_path = f'{wallet._storage.get_location()}.sock'
        self.srv = None
        self.sr = None
        self.sw = None
        self.tasks = set()

    async def async_init(self):
        self.srv = await asyncio.start_unix_server(
            self.handle_connection, self.sock_path)

    async def serve_forever(self):
        return await self.srv.serve_forever()

    async def handle_connection(self, sr, sw):
        if self.sr or self.sw:
            jlog.error('FrostIPCServer.handle_connection: client '
                       'already connected, ignoring other connection attempt')
            return
        jlog.info('FrostIPCServer.handle_connection: new client connected')
        self.sr = sr
        self.sw = sw
        await self.process_msgs()

    async def process_msgs(self):
        while True:
            try:
                line_data = await self.sr.readline()
                if not line_data:
                    if self.sr.at_eof():
                        jlog.info('FrostIPCServer.process_msgs: '
                                  'client disconnected')
                        self.sr = None
                        self.sw = None
                        while self.tasks:
                            task = self.tasks.pop()
                            task.cancel()
                        break
                    else:
                        jlog.error('FrostIPCServer.process_msgs: '
                                   'empty line ignored')
                        continue
                enc_bytes = line_data.strip()
                msg_dict = self.decrypt_msg(enc_bytes)
                msg_id = msg_dict['msg_id']
                cmd = msg_dict['cmd']
                data = msg_dict['data']
                task = None
                if cmd == 'get_dkg_pubkey':
                    task = self.loop.create_task(
                        self.on_get_dkg_pubkey(msg_id, *data))
                elif cmd == 'frost_sign':
                    task = self.loop.create_task(
                        self.on_frost_sign(msg_id, *data))
                if task:
                    self.tasks.add(task)
            except Exception as e:
                jlog.error(f'FrostIPCServer.process_msgs: {repr(e)}')
                await asyncio.sleep(0.1)

    async def on_get_dkg_pubkey(self, msg_id, mixdepth, address_type, index):
        try:
            wallet = self.wallet
            dkg = wallet.dkg
            new_pubkey = dkg.find_dkg_pubkey(mixdepth, address_type, index)
            if new_pubkey is None:
                client = wallet.client_factory.getClient()
                frost_client = wallet.client_factory.client
                frost_client.dkg_gen_list.append(
                    (mixdepth, address_type, index))
                await client.dkg_gen()
                new_pubkey = dkg.find_dkg_pubkey(mixdepth, address_type, index)
            if new_pubkey:
                await self.send_dkg_pubkey(msg_id, new_pubkey)
        except Exception as e:
            jlog.error(f'FrostIPCServer.on_get_dkg_pubkey: {repr(e)}')

    async def send_dkg_pubkey(self, msg_id, pubkey):
        try:
            msg_dict = {
                'msg_id': msg_id,
                'cmd': 'dkg_pubkey',
                'data': pubkey,
            }
            self.sw.write(self.encrypt_msg(msg_dict))
            await self.sw.drain()
        except Exception as e:
            jlog.error(f'FrostIPCServer.send_dkg_pubkey: {repr(e)}')

    async def on_frost_sign(self, msg_id, mixdepth, address_type, index,
                            sighash):
        try:
            wallet = self.wallet
            client = wallet.client_factory.getClient()
            frost_client = wallet.client_factory.client
            dkg = wallet.dkg
            dkg_session_id = dkg.find_session(mixdepth, address_type, index)
            session_id, _, _ = client.frost_init(dkg_session_id, sighash)
            sig, tweaked_pubkey = await frost_client.wait_on_sig(session_id)
            pubkey = dkg.find_dkg_pubkey(mixdepth, address_type, index)
            await self.send_frost_sig(msg_id, sig, pubkey, tweaked_pubkey)
        except Exception as e:
            jlog.error(f'FrostIPCServer.on_frost_sign: {repr(e)}')

    async def send_frost_sig(self, msg_id, sig, pubkey, tweaked_pubkey):
        try:
            msg_dict = {
                'msg_id': msg_id,
                'cmd': 'frost_sig',
                'data': (sig, pubkey, tweaked_pubkey),
            }
            self.sw.write(self.encrypt_msg(msg_dict))
            await self.sw.drain()
        except Exception as e:
            jlog.error(f'FrostIPCServer.send_frost_sig: {repr(e)}')


class FrostIPCClient(IPCBase):

    def __init__(self, wallet):
        self.loop = asyncio.get_event_loop()
        self.msg_id = 0
        self.msg_futures = {}
        self.wallet = wallet
        self.pubkey = btc.privkey_to_pubkey(wallet._hostseckey)
        self.sock_path = f'{wallet._storage.get_location()}.sock'
        self.sr = None
        self.sw = None

    async def async_init(self):
        try:
            self.sr, self.sw = await asyncio.open_unix_connection(
                self.sock_path)
            self.loop.create_task(self.process_msgs())
        except (ConnectionRefusedError, FileNotFoundError):
            # a missing socket path raises FileNotFoundError, a stale
            # socket with no listener raises ConnectionRefusedError
            jmprint('No servefrost socket found. Run "wallet-tool.py '
                    'wallet.jmdat servefrost" in a separate console.', "error")
            twisted_sys_exit(EXIT_FAILURE)

    async def process_msgs(self):
        while True:
            try:
                line_data = await self.sr.readline()
                if not line_data:
                    if self.sr.at_eof():
                        jlog.info('FrostIPCClient.process_msgs: '
                                  'server disconnected')
                        self.sr = None
                        self.sw = None
                        for msg_id in list(self.msg_futures):
                            fut = self.msg_futures.pop(msg_id)
                            fut.cancel()
                        break
                    else:
                        jlog.error('FrostIPCClient.process_msgs: '
                                   'empty line ignored')
                        continue
                enc_bytes = line_data.strip()
                msg_dict = self.decrypt_msg(enc_bytes)
                msg_id = msg_dict['msg_id']
                cmd = msg_dict['cmd']
                data = msg_dict['data']
                if cmd in ['dkg_pubkey', 'frost_sig']:
                    await self.on_response(msg_id, data)
            except Exception as e:
                jlog.error(f'FrostIPCClient.process_msgs: {repr(e)}')
                await asyncio.sleep(0.1)

    async def on_response(self, msg_id, data):
        fut = self.msg_futures.pop(msg_id, None)
        if fut:
            fut.set_result(data)

    async def get_dkg_pubkey(self, mixdepth, address_type, index):
        jlog.debug(f'FrostIPCClient.get_dkg_pubkey for mixdepth={mixdepth}, '
                   f'address_type={address_type}, index={index}')
        try:
            self.msg_id += 1
            msg_dict = {
                'msg_id': self.msg_id,
                'cmd': 'get_dkg_pubkey',
                'data': (mixdepth, address_type, index),
            }
            # register the future before writing, so that a response arriving
            # while awaiting drain() cannot find the future missing
            fut = self.loop.create_future()
            self.msg_futures[self.msg_id] = fut
            self.sw.write(self.encrypt_msg(msg_dict))
            await self.sw.drain()
            await fut
            pubkey = fut.result()
            jlog.debug('FrostIPCClient.get_dkg_pubkey successfully got pubkey')
            return pubkey
        except Exception as e:
            jlog.error(f'FrostIPCClient.get_dkg_pubkey: {repr(e)}')

    async def frost_sign(self, mixdepth, address_type, index, sighash):
        jlog.debug(f'FrostIPCClient.frost_sign for mixdepth={mixdepth}, '
                   f'address_type={address_type}, index={index}, '
                   f'sighash={sighash.hex()}')
        try:
            self.msg_id += 1
            msg_dict = {
                'msg_id': self.msg_id,
                'cmd': 'frost_sign',
                'data': (mixdepth, address_type, index, sighash),
            }
            # register the future before writing (see get_dkg_pubkey)
            fut = self.loop.create_future()
            self.msg_futures[self.msg_id] = fut
            self.sw.write(self.encrypt_msg(msg_dict))
            await self.sw.drain()
            await fut
            sig, pubkey, tweaked_pubkey = fut.result()
            jlog.debug('FrostIPCClient.frost_sign successfully got signature')
            return sig, pubkey, tweaked_pubkey
        except Exception as e:
            jlog.error(f'FrostIPCClient.frost_sign: {repr(e)}')
            return None, None, None

@@ -0,0 +1,19 @@

# -*- coding: utf-8 -*-

# chilldkg_ref, secp256k1proto code is from
# https://github.com/BlockstreamResearch/bip-frost-dkg
#
# commit 1731341f04157592e2f184cb00a37c4d331188e3
# Author: Tim Ruffing <me@real-or-random.org>
# Date: Wed Dec 18 23:42:26 2024 +0100
#
#     text: Use links for internal references

# frost_ref is from
# https://github.com/siv2r/bip-frost-signing
#
# commit 2f249969f84c1533671c521bf864fddecb371018
# Author: siv2r <siv2ram@gmail.com>
# Date: Sat Dec 7 17:13:54 2024 +0530
#
#     spec: add header, changelog, and acknowledgements

@@ -0,0 +1,3 @@

# -*- coding: utf-8 -*-

__all__ = ["chilldkg"]
@ -0,0 +1,841 @@
|
||||
"""Reference implementation of ChillDKG. |
||||
|
||||
WARNING: This code is slow and trivially vulnerable to side channel attacks. Do |
||||
not use for anything but tests. |
||||
|
||||
The public API consists of all functions with docstrings, including the types in |
||||
their arguments and return values, and the exceptions they raise; see also the |
||||
`__all__` list. All other definitions are internal. |
||||
""" |
||||
|
||||
from secrets import token_bytes as random_bytes |
||||
from typing import Any, Tuple, List, NamedTuple, NewType, Optional, NoReturn, Dict |
||||
|
||||
from ..secp256k1proto.secp256k1 import Scalar, GE |
||||
from ..secp256k1proto.bip340 import schnorr_sign, schnorr_verify |
||||
from ..secp256k1proto.keys import pubkey_gen_plain |
||||
from ..secp256k1proto.util import int_from_bytes, bytes_from_int |
||||
|
||||
from .vss import VSSCommitment |
||||
from . import encpedpop |
||||
from .util import ( |
||||
BIP_TAG, |
||||
tagged_hash_bip_dkg, |
||||
ProtocolError, |
||||
FaultyParticipantOrCoordinatorError, |
||||
FaultyCoordinatorError, |
||||
UnknownFaultyParticipantOrCoordinatorError, |
||||
FaultyParticipantError, |
||||
) |
||||
|
||||
__all__ = [ |
||||
# Functions |
||||
"hostpubkey_gen", |
||||
"params_id", |
||||
"participant_step1", |
||||
"participant_step2", |
||||
"participant_finalize", |
||||
"participant_investigate", |
||||
"coordinator_step1", |
||||
"coordinator_finalize", |
||||
"coordinator_investigate", |
||||
"recover", |
||||
# Exceptions |
||||
"HostSeckeyError", |
||||
"SessionParamsError", |
||||
"InvalidHostPubkeyError", |
||||
"DuplicateHostPubkeyError", |
||||
"ThresholdOrCountError", |
||||
"ProtocolError", |
||||
"FaultyParticipantOrCoordinatorError", |
||||
"FaultyCoordinatorError", |
||||
"UnknownFaultyParticipantOrCoordinatorError", |
||||
"RecoveryDataError", |
||||
# Types |
||||
"SessionParams", |
||||
"DKGOutput", |
||||
"ParticipantMsg1", |
||||
"ParticipantMsg2", |
||||
"CoordinatorInvestigationMsg", |
||||
"ParticipantState1", |
||||
"ParticipantState2", |
||||
"CoordinatorMsg1", |
||||
"CoordinatorMsg2", |
||||
"CoordinatorState", |
||||
"RecoveryData", |
||||
] |
||||
|
||||
|
||||
### |
||||
### Equality check protocol CertEq |
||||
### |
||||
|
||||
|
||||
def certeq_message(x: bytes, idx: int) -> bytes: |
||||
# Domain separation as described in BIP 340 |
||||
prefix = (BIP_TAG + "certeq message").encode() |
||||
prefix = prefix + b"\x00" * (33 - len(prefix)) |
||||
return prefix + idx.to_bytes(4, "big") + x |
||||
|
||||
|
||||
def certeq_participant_step(hostseckey: bytes, idx: int, x: bytes) -> bytes: |
||||
msg = certeq_message(x, idx) |
||||
return schnorr_sign(msg, hostseckey, aux_rand=random_bytes(32)) |
||||
|
||||
|
||||
def certeq_cert_len(n: int) -> int: |
||||
return 64 * n |
||||
|
||||
|
||||
def certeq_verify(hostpubkeys: List[bytes], x: bytes, cert: bytes) -> None: |
||||
n = len(hostpubkeys) |
||||
if len(cert) != certeq_cert_len(n): |
||||
raise ValueError |
||||
for i in range(n): |
||||
msg = certeq_message(x, i) |
||||
valid = schnorr_verify( |
||||
msg, |
||||
hostpubkeys[i][1:33], |
||||
cert[i * 64 : (i + 1) * 64], |
||||
) |
||||
if not valid: |
||||
raise InvalidSignatureInCertificateError(i) |
||||
|
||||
|
||||
def certeq_coordinator_step(sigs: List[bytes]) -> bytes: |
||||
cert = b"".join(sigs) |
||||
return cert |
||||
|
||||
|
||||
class InvalidSignatureInCertificateError(ValueError): |
||||
def __init__(self, participant: int, *args: Any): |
||||
self.participant = participant |
||||
super().__init__(participant, *args) |
||||
|
||||
|
||||
### |
||||
### Host keys |
||||
### |
||||
|
||||
|
||||
def hostpubkey_gen(hostseckey: bytes) -> bytes: |
||||
"""Compute the participant's host public key from the host secret key. |
||||
|
||||
The host public key is the long-term cryptographic identity of the |
||||
participant. |
||||
|
||||
This function interprets `hostseckey` as big-endian integer, and computes |
||||
the corresponding "plain" public key in compressed serialization (33 bytes, |
||||
starting with 0x02 or 0x03). This is the key generation procedure |
||||
traditionally used in Bitcoin, e.g., for ECDSA. In other words, this |
||||
function is equivalent to `IndividualPubkey` as defined in |
||||
[[BIP 327](https://github.com/bitcoin/bips/blob/master/bip-0327.mediawiki#key-generation-of-an-individual-signer)]. |
||||
TODO Refer to the FROST signing BIP instead, once that one has a number. |
||||
|
||||
Arguments: |
||||
hostseckey: This participant's long-term secret key (32 bytes). |
||||
The key **must** be 32 bytes of cryptographically secure randomness |
||||
with sufficient entropy to be unpredictable. All outputs of a |
||||
successful participant in a session can be recovered from (a backup |
||||
of) the key and per-session recovery data. |
||||
|
||||
The same host secret key (and thus the same host public key) can be |
||||
used in multiple DKG sessions. A host public key can be correlated |
||||
to the threshold public key resulting from a DKG session only by |
||||
parties who observed the session, namely the participants, the |
||||
coordinator (and any eavesdropper). |
||||
|
||||
Returns: |
||||
The host public key (33 bytes). |
||||
|
||||
Raises: |
||||
HostSeckeyError: If the length of `hostseckey` is not 32 bytes. |
||||
""" |
||||
if len(hostseckey) != 32: |
||||
raise HostSeckeyError |
||||
|
||||
return pubkey_gen_plain(hostseckey) |
||||
|
||||
|
||||
class HostSeckeyError(ValueError): |
||||
"""Raised if the length of a host secret key is not 32 bytes.""" |
||||
|
||||
|
||||
### |
||||
### Session input and outputs |
||||
### |
||||
|
||||
|
||||
# It would be more idiomatic Python to make this a real (data)class, perform |
||||
# data validation in the constructor, and add methods to it, but let's stick to |
||||
# simple tuples in the public API in order to keep it approachable to readers |
||||
# who are not too familiar with Python. |
||||
class SessionParams(NamedTuple): |
||||
"""A `SessionParams` tuple holds the common parameters of a DKG session. |
||||
|
||||
Attributes: |
||||
hostpubkeys: Ordered list of the host public keys of all participants. |
||||
t: The participation threshold `t`. |
||||
This is the number of participants that will be required to sign. |
||||
It must hold that `1 <= t <= len(hostpubkeys) <= 2**32 - 1`. |
||||
|
||||
Participants **must** ensure that they have obtained authentic host |
||||
public keys of all the other participants in the session to make |
||||
sure that they run the DKG and generate a threshold public key with |
||||
the intended set of participants. This is analogous to traditional |
||||
threshold signatures (known as "multisig" in the Bitcoin community), |
||||
[[BIP 383](https://github.com/bitcoin/bips/blob/master/bip-0383.mediawiki)], |
||||
where the participants need to obtain authentic extended public keys |
||||
("xpubs") from the other participants to generate multisig |
||||
addresses, or MuSig2 |
||||
[[BIP 327](https://github.com/bitcoin/bips/blob/master/bip-0327.mediawiki)], |
||||
where the participants need to obtain authentic individual public |
||||
keys of the other participants to generate an aggregated public key. |
||||
|
||||
A DKG session will fail if the participants and the coordinator in a session |
||||
don't have the `hostpubkeys` in the same order. This will make sure that |
||||
honest participants agree on the order as part of the session, which is |
||||
useful if the order carries an implicit meaning in the application (e.g., if |
||||
the first `t` participants are the primary participants for signing and the |
||||
others are fallback participants). If there is no canonical order of the |
||||
participants in the application, the caller can sort the list of host public |
||||
keys with the [KeySort algorithm specified in |
||||
BIP 327](https://github.com/bitcoin/bips/blob/master/bip-0327.mediawiki#key-sorting) |
||||
to abstract away from the order. |
||||
""" |
||||
|
||||
hostpubkeys: List[bytes] |
||||
t: int |
||||
|
||||
|
||||
def params_validate(params: SessionParams) -> None: |
||||
(hostpubkeys, t) = params |
||||
|
||||
if not (1 <= t <= len(hostpubkeys) <= 2**32 - 1): |
||||
raise ThresholdOrCountError |
||||
|
||||
# Check that all hostpubkeys are valid |
||||
for i, hostpubkey in enumerate(hostpubkeys): |
||||
try: |
||||
_ = GE.from_bytes_compressed(hostpubkey) |
||||
except ValueError as e: |
||||
raise InvalidHostPubkeyError(i) from e |
||||
|
||||
# Check for duplicate hostpubkeys and find the corresponding indices |
||||
hostpubkey_to_idx: Dict[bytes, int] = dict() |
||||
for i, hostpubkey in enumerate(hostpubkeys): |
||||
if hostpubkey in hostpubkey_to_idx: |
||||
raise DuplicateHostPubkeyError(hostpubkey_to_idx[hostpubkey], i) |
||||
hostpubkey_to_idx[hostpubkey] = i |
||||
|
||||
|
||||
def params_id(params: SessionParams) -> bytes:
    """Return the parameters ID, a unique representation of the `SessionParams`.

    In the common scenario that the participants obtain host public keys from
    the other participants over channels that do not provide end-to-end
    authentication of the sending participant (e.g., if the participants simply
    send their unauthenticated host public keys to the coordinator, who is
    supposed to relay them to all participants), the parameters ID serves as a
    convenient way to perform an out-of-band comparison of all host public keys.
    It is a collision-resistant cryptographic hash of the `SessionParams`
    tuple. As a result, if all participants have obtained an identical
    parameters ID (as can be verified out-of-band), then they all agree on all
    host public keys and the threshold `t`, and in particular, all participants
    have obtained authentic public host keys.

    Returns:
        bytes: The parameters ID, a 32-byte string.

    Raises:
        InvalidHostPubkeyError: If `hostpubkeys` contains an invalid public key.
        DuplicateHostPubkeyError: If `hostpubkeys` contains duplicates.
        ThresholdOrCountError: If `1 <= t <= len(hostpubkeys) <= 2**32 - 1` does
            not hold.
    """
    params_validate(params)
    hostpubkeys, t = params

    t_bytes = t.to_bytes(4, byteorder="big")
    params_id = tagged_hash_bip_dkg(
        "params_id",
        t_bytes + b"".join(hostpubkeys),
    )
    assert len(params_id) == 32
    return params_id

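The hashing step in `params_id` can be sketched standalone. This is a minimal sketch, assuming a BIP 340-style tagged hash; the exact tag string used by `tagged_hash_bip_dkg` in this codebase may differ, so `toy_params_id` is a hypothetical illustration, not the normative computation:

```python
import hashlib
from typing import List


def tagged_hash(tag: str, msg: bytes) -> bytes:
    # BIP 340-style tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + msg).digest()


def toy_params_id(hostpubkeys: List[bytes], t: int) -> bytes:
    # Mirror params_id: hash the 4-byte big-endian threshold followed by
    # the concatenation of all host public keys.
    return tagged_hash("params_id", t.to_bytes(4, "big") + b"".join(hostpubkeys))
```

Any reordering of `hostpubkeys` or change of `t` yields a different ID, which is what makes the out-of-band comparison meaningful.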
||||
class SessionParamsError(ValueError):
    """Base exception for invalid `SessionParams` tuples."""


class DuplicateHostPubkeyError(SessionParamsError):
    """Raised if two participants have identical host public keys.

    This exception is raised when two participants have an identical host public
    key in the `SessionParams` tuple. Assuming the host public keys in question
    have been transmitted correctly, this exception implies that at least one of
    the two participants is faulty (because duplicates occur only with
    negligible probability if keys are generated honestly).

    Attributes:
        participant1 (int): Index of the first participant.
        participant2 (int): Index of the second participant.
    """

    def __init__(self, participant1: int, participant2: int, *args: Any):
        self.participant1 = participant1
        self.participant2 = participant2
        super().__init__(participant1, participant2, *args)


class InvalidHostPubkeyError(SessionParamsError):
    """Raised if a host public key is invalid.

    This exception is raised when a host public key in the `SessionParams` tuple
    is not a valid public key in compressed serialization. Assuming the host
    public key in question has been transmitted correctly, this exception
    implies that the corresponding participant is faulty.

    Attributes:
        participant (int): Index of the participant.
    """

    def __init__(self, participant: int, *args: Any):
        self.participant = participant
        super().__init__(participant, *args)


class ThresholdOrCountError(SessionParamsError):
    """Raised if `1 <= t <= len(hostpubkeys) <= 2**32 - 1` does not hold."""


# This is really the same definition as in simplpedpop and encpedpop. We repeat
# it here only to have its docstring in this module.
class DKGOutput(NamedTuple):
    """Holds the outputs of a DKG session.

    Attributes:
        secshare: Secret share of the participant (or `None` for coordinator)
        threshold_pubkey: Generated threshold public key representing the group
        pubshares: Public shares of the participants
    """

    secshare: Optional[bytes]
    threshold_pubkey: bytes
    pubshares: List[bytes]


RecoveryData = NewType("RecoveryData", bytes)

||||
###
### Messages
###


class ParticipantMsg1(NamedTuple):
    enc_pmsg: encpedpop.ParticipantMsg


class ParticipantMsg2(NamedTuple):
    sig: bytes


class CoordinatorMsg1(NamedTuple):
    enc_cmsg: encpedpop.CoordinatorMsg
    enc_secshares: List[Scalar]


class CoordinatorMsg2(NamedTuple):
    cert: bytes


class CoordinatorInvestigationMsg(NamedTuple):
    enc_cinv: encpedpop.CoordinatorInvestigationMsg

||||
def deserialize_recovery_data(
    b: bytes,
) -> Tuple[int, VSSCommitment, List[bytes], List[bytes], List[Scalar], bytes]:
    rest = b

    # Read t (4 bytes)
    if len(rest) < 4:
        raise ValueError
    t, rest = int.from_bytes(rest[:4], byteorder="big"), rest[4:]

    # Read sum_coms (33*t bytes)
    if len(rest) < 33 * t:
        raise ValueError
    sum_coms, rest = (
        VSSCommitment.from_bytes_and_t(rest[: 33 * t], t),
        rest[33 * t :],
    )

    # Compute n
    n, remainder = divmod(len(rest), (33 + 33 + 32 + 64))
    if remainder != 0:
        raise ValueError

    # Read hostpubkeys (33*n bytes)
    if len(rest) < 33 * n:
        raise ValueError
    hostpubkeys, rest = [rest[i : i + 33] for i in range(0, 33 * n, 33)], rest[33 * n :]

    # Read pubnonces (33*n bytes)
    if len(rest) < 33 * n:
        raise ValueError
    pubnonces, rest = [rest[i : i + 33] for i in range(0, 33 * n, 33)], rest[33 * n :]

    # Read enc_secshares (32*n bytes)
    if len(rest) < 32 * n:
        raise ValueError
    enc_secshares, rest = (
        [Scalar(int_from_bytes(rest[i : i + 32])) for i in range(0, 32 * n, 32)],
        rest[32 * n :],
    )

    # Read cert
    cert_len = certeq_cert_len(n)
    if len(rest) < cert_len:
        raise ValueError
    cert, rest = rest[:cert_len], rest[cert_len:]

    if len(rest) != 0:
        raise ValueError
    return (t, sum_coms, hostpubkeys, pubnonces, enc_secshares, cert)

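The parser above fixes the layout of the recovery data, so its total length is fully determined by `t` and `n`. A sketch of the arithmetic, assuming the certificate is one 64-byte signature per participant (i.e., `certeq_cert_len(n) == 64 * n`, which matches the `33 + 33 + 32 + 64` divisor used to compute `n`):

```python
def recovery_data_len(t: int, n: int) -> int:
    # Field sizes implied by deserialize_recovery_data:
    return (
        4  # threshold t, 4 bytes big-endian
        + 33 * t  # VSS commitment: t compressed group elements
        + 33 * n  # host public keys
        + 33 * n  # public nonces
        + 32 * n  # encrypted secret shares
        + 64 * n  # certificate (assumed: one 64-byte signature per participant)
    )
```

For example, a 2-of-3 session yields 4 + 66 + 3·162 = 556 bytes.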
||||
###
### Participant
###


class ParticipantState1(NamedTuple):
    params: SessionParams
    idx: int
    enc_state: encpedpop.ParticipantState


class ParticipantState2(NamedTuple):
    params: SessionParams
    eq_input: bytes
    dkg_output: DKGOutput

||||
def participant_step1(
    hostseckey: bytes, params: SessionParams, random: bytes
) -> Tuple[ParticipantState1, ParticipantMsg1]:
    """Perform a participant's first step of a ChillDKG session.

    Arguments:
        hostseckey: Participant's long-term host secret key (32 bytes).
        params: Common session parameters.
        random: FRESH random byte string (32 bytes).

    Returns:
        ParticipantState1: The participant's session state after this step, to
            be passed as an argument to `participant_step2`. The state **must
            not** be reused (i.e., it must be passed only to one
            `participant_step2` call).
        ParticipantMsg1: The first message to be sent to the coordinator.

    Raises:
        HostSeckeyError: If the length of `hostseckey` is not 32 bytes or if
            `hostseckey` does not match any entry of `hostpubkeys`.
        InvalidHostPubkeyError: If `hostpubkeys` contains an invalid public key.
        DuplicateHostPubkeyError: If `hostpubkeys` contains duplicates.
        ThresholdOrCountError: If `1 <= t <= len(hostpubkeys) <= 2**32 - 1` does
            not hold.
    """
    hostpubkey = hostpubkey_gen(hostseckey)  # HostSeckeyError if len(hostseckey) != 32

    params_validate(params)
    (hostpubkeys, t) = params

    try:
        idx = hostpubkeys.index(hostpubkey)
    except ValueError as e:
        raise HostSeckeyError(
            "Host secret key does not match any host public key"
        ) from e
    enc_state, enc_pmsg = encpedpop.participant_step1(
        # We know that EncPedPop uses its seed only by feeding it to a hash
        # function. Thus, it is sufficient that the seed has a high entropy,
        # and so we can simply pass the hostseckey as seed.
        seed=hostseckey,
        deckey=hostseckey,
        t=t,
        # This requires the joint security of Schnorr signatures and ECDH.
        enckeys=hostpubkeys,
        idx=idx,
        random=random,
    )  # HostSeckeyError if len(hostseckey) != 32
    state1 = ParticipantState1(params, idx, enc_state)
    return state1, ParticipantMsg1(enc_pmsg)

||||
def participant_step2(
    hostseckey: bytes,
    state1: ParticipantState1,
    cmsg1: CoordinatorMsg1,
) -> Tuple[ParticipantState2, ParticipantMsg2]:
    """Perform a participant's second step of a ChillDKG session.

    **Warning:**
    After sending the returned message to the coordinator, this participant
    **must not** erase the hostseckey, even if this participant does not receive
    the coordinator reply needed for the `participant_finalize` call. The
    underlying reason is that some other participant may receive the coordinator
    reply, deem the DKG session successful and use the resulting threshold
    public key (e.g., by sending funds to it). If the coordinator reply remains
    missing, that other participant can, at any point in the future, convince
    this participant of the success of the DKG session by presenting recovery
    data, from which this participant can recover the DKG output using the
    `recover` function.

    Arguments:
        hostseckey: Participant's long-term host secret key (32 bytes).
        state1: The participant's session state as output by
            `participant_step1`.
        cmsg1: The first message received from the coordinator.

    Returns:
        ParticipantState2: The participant's session state after this step, to
            be passed as an argument to `participant_finalize`. The state **must
            not** be reused (i.e., it must be passed only to one
            `participant_finalize` call).
        ParticipantMsg2: The second message to be sent to the coordinator.

    Raises:
        HostSeckeyError: If the length of `hostseckey` is not 32 bytes.
        FaultyParticipantOrCoordinatorError: If another known participant or the
            coordinator is faulty. See the documentation of the exception for
            further details.
        UnknownFaultyParticipantOrCoordinatorError: If another unknown
            participant or the coordinator is faulty, but running the optional
            investigation procedure of the protocol is necessary to determine a
            suspected participant. See the documentation of the exception for
            further details.
    """
    params, idx, enc_state = state1
    enc_cmsg, enc_secshares = cmsg1

    enc_dkg_output, eq_input = encpedpop.participant_step2(
        state=enc_state,
        deckey=hostseckey,
        cmsg=enc_cmsg,
        enc_secshare=enc_secshares[idx],
    )

    # Include the enc_shares in eq_input to ensure that participants agree on
    # all shares, which in turn ensures that they have the right recovery data.
    eq_input += b"".join([bytes_from_int(int(share)) for share in enc_secshares])
    dkg_output = DKGOutput._make(enc_dkg_output)
    state2 = ParticipantState2(params, eq_input, dkg_output)
    sig = certeq_participant_step(hostseckey, idx, eq_input)
    pmsg2 = ParticipantMsg2(sig)
    return state2, pmsg2

||||
def participant_finalize(
    state2: ParticipantState2, cmsg2: CoordinatorMsg2
) -> Tuple[DKGOutput, RecoveryData]:
    """Perform a participant's final step of a ChillDKG session.

    If this function returns properly (without an exception), then this
    participant deems the DKG session successful. It is, however, possible that
    other participants have received a `cmsg2` from the coordinator that made
    them raise an exception instead, or that they have not received a `cmsg2`
    from the coordinator at all. These participants can, at any point in time in
    the future (e.g., when initiating a signing session), be convinced to deem
    the session successful by presenting the recovery data to them, from which
    they can recover the DKG outputs using the `recover` function.

    **Warning:**
    Changing perspectives, this implies that, even when obtaining an exception,
    this participant **must not** conclude that the DKG session has failed, and
    as a consequence, this participant **must not** erase the hostseckey. The
    underlying reason is that some other participant may deem the DKG session
    successful and use the resulting threshold public key (e.g., by sending
    funds to it). That other participant can, at any point in the future,
    convince this participant of the success of the DKG session by presenting
    recovery data to this participant.

    Arguments:
        state2: The participant's state as output by `participant_step2`.

    Returns:
        DKGOutput: The DKG output.
        bytes: The serialized recovery data.

    Raises:
        FaultyParticipantOrCoordinatorError: If another known participant or the
            coordinator is faulty. Make sure to read the above warning, and see
            the documentation of the exception for further details.
        FaultyCoordinatorError: If the coordinator is faulty. Make sure to read
            the above warning, and see the documentation of the exception for
            further details.
    """
    params, eq_input, dkg_output = state2
    try:
        certeq_verify(params.hostpubkeys, eq_input, cmsg2.cert)
    except InvalidSignatureInCertificateError as e:
        raise FaultyParticipantOrCoordinatorError(
            e.participant,
            "Participant has provided an invalid signature for the certificate",
        ) from e
    return dkg_output, RecoveryData(eq_input + cmsg2.cert)

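As the return statement of `participant_finalize` shows, the recovery data is simply the concatenation `eq_input || cert`. A toy check that the split used later in `recover` (`eq_input = recovery_data[: -len(cert)]`) inverts this concatenation; the byte strings here are placeholders, not real session values:

```python
eq_input = b"E" * 100  # placeholder transcript bytes
cert = b"C" * 128      # placeholder certificate, e.g., two 64-byte signatures
recovery_data = eq_input + cert

# Splitting off the fixed-length certificate recovers both components.
assert recovery_data[: -len(cert)] == eq_input
assert recovery_data[-len(cert) :] == cert
```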
||||
def participant_investigate(
    error: UnknownFaultyParticipantOrCoordinatorError,
    cinv: CoordinatorInvestigationMsg,
) -> NoReturn:
    """Investigate who is to blame for a failed ChillDKG session.

    This function can optionally be called when `participant_step2` raises
    `UnknownFaultyParticipantOrCoordinatorError`. It narrows down the suspected
    faulty parties by analyzing the investigation message provided by the
    coordinator.

    This function does not return normally. Instead, it raises one of two
    exceptions.

    Arguments:
        error: `UnknownFaultyParticipantOrCoordinatorError` raised by
            `participant_step2`.
        cinv: Coordinator investigation message for this participant as output
            by `coordinator_investigate`.

    Raises:
        FaultyParticipantOrCoordinatorError: If another known participant or the
            coordinator is faulty. See the documentation of the exception for
            further details.
        FaultyCoordinatorError: If the coordinator is faulty. See the
            documentation of the exception for further details.
    """
    assert isinstance(error.inv_data, encpedpop.ParticipantInvestigationData)
    encpedpop.participant_investigate(
        error=error,
        cinv=cinv.enc_cinv,
    )

||||
###
### Coordinator
###


class CoordinatorState(NamedTuple):
    params: SessionParams
    eq_input: bytes
    dkg_output: DKGOutput

||||
def coordinator_step1(
    pmsgs1: List[ParticipantMsg1], params: SessionParams
) -> Tuple[CoordinatorState, CoordinatorMsg1]:
    """Perform the coordinator's first step of a ChillDKG session.

    Arguments:
        pmsgs1: List of first messages received from the participants.
        params: Common session parameters.

    Returns:
        CoordinatorState: The coordinator's session state after this step, to be
            passed as an argument to `coordinator_finalize`. The state is not
            supposed to be reused (i.e., it should be passed only to one
            `coordinator_finalize` call).
        CoordinatorMsg1: The first message to be sent to all participants.

    Raises:
        InvalidHostPubkeyError: If `hostpubkeys` contains an invalid public key.
        DuplicateHostPubkeyError: If `hostpubkeys` contains duplicates.
        ThresholdOrCountError: If `1 <= t <= len(hostpubkeys) <= 2**32 - 1` does
            not hold.
    """
    params_validate(params)
    hostpubkeys, t = params

    enc_cmsg, enc_dkg_output, eq_input, enc_secshares = encpedpop.coordinator_step(
        pmsgs=[pmsg1.enc_pmsg for pmsg1 in pmsgs1],
        t=t,
        enckeys=hostpubkeys,
    )
    eq_input += b"".join([bytes_from_int(int(share)) for share in enc_secshares])
    dkg_output = DKGOutput._make(enc_dkg_output)  # Convert to chilldkg.DKGOutput type
    state = CoordinatorState(params, eq_input, dkg_output)
    cmsg1 = CoordinatorMsg1(enc_cmsg, enc_secshares)
    return state, cmsg1

||||
def coordinator_finalize(
    state: CoordinatorState, pmsgs2: List[ParticipantMsg2]
) -> Tuple[CoordinatorMsg2, DKGOutput, RecoveryData]:
    """Perform the coordinator's final step of a ChillDKG session.

    If this function returns properly (without an exception), then the
    coordinator deems the DKG session successful. The returned `CoordinatorMsg2`
    is supposed to be sent to all participants, who are supposed to pass it as
    input to the `participant_finalize` function. It is, however, possible that
    some participants pass a wrong and invalid message to `participant_finalize`
    (e.g., because the message is transmitted incorrectly). These participants
    can, at any point in time in the future (e.g., when initiating a signing
    session), be convinced to deem the session successful by presenting the
    recovery data to them, from which they can recover the DKG outputs using the
    `recover` function.

    If this function raises an exception, then the DKG session was not
    successful from the perspective of the coordinator. In this case, it is, in
    principle, possible to recover the DKG outputs of the coordinator using the
    recovery data from a successful participant, should one exist. Any such
    successful participant is either faulty, or has received messages from
    other participants via a communication channel beside the coordinator.

    Arguments:
        state: The coordinator's session state as output by `coordinator_step1`.
        pmsgs2: List of second messages received from the participants.

    Returns:
        CoordinatorMsg2: The second message to be sent to all participants.
        DKGOutput: The DKG output. Since the coordinator does not have a secret
            share, the DKG output will have the `secshare` field set to `None`.
        bytes: The serialized recovery data.

    Raises:
        FaultyParticipantError: If a participant is faulty. See the
            documentation of the exception for further details.
    """
    params, eq_input, dkg_output = state
    cert = certeq_coordinator_step([pmsg2.sig for pmsg2 in pmsgs2])
    try:
        certeq_verify(params.hostpubkeys, eq_input, cert)
    except InvalidSignatureInCertificateError as e:
        raise FaultyParticipantError(
            e.participant,
            "Participant has provided an invalid signature for the certificate",
        ) from e
    return CoordinatorMsg2(cert), dkg_output, RecoveryData(eq_input + cert)

||||
def coordinator_investigate(
    pmsgs: List[ParticipantMsg1],
) -> List[CoordinatorInvestigationMsg]:
    """Generate investigation messages for a ChillDKG session.

    The investigation messages will allow the participants to investigate who is
    to blame for a failed ChillDKG session (see `participant_investigate`).

    Each message is intended for a single participant but can be safely
    broadcast to all participants because the messages contain no confidential
    information.

    Arguments:
        pmsgs: List of first messages received from the participants.

    Returns:
        List[CoordinatorInvestigationMsg]: A list of investigation messages, each
            intended for a single participant.
    """
    enc_cinvs = encpedpop.coordinator_investigate([pmsg.enc_pmsg for pmsg in pmsgs])
    return [CoordinatorInvestigationMsg(enc_cinv) for enc_cinv in enc_cinvs]

||||
###
### Recovery
###


def recover(
    hostseckey: Optional[bytes], recovery_data: RecoveryData
) -> Tuple[DKGOutput, SessionParams]:
    """Recover the DKG output of a ChillDKG session.

    This function serves two different purposes:
    1. To recover from an exception in `participant_finalize` or
       `coordinator_finalize`, after obtaining the recovery data from another
       participant or the coordinator. See `participant_finalize` and
       `coordinator_finalize` for background.
    2. To reproduce the DKG outputs on a new device, e.g., to recover from a
       backup after data loss.

    Arguments:
        hostseckey: This participant's long-term host secret key (32 bytes) or
            `None` if recovering the coordinator.
        recovery_data: Recovery data from a successful session.

    Returns:
        DKGOutput: The recovered DKG output.
        SessionParams: The common parameters of the recovered session.

    Raises:
        HostSeckeyError: If the length of `hostseckey` is not 32 bytes or if
            `hostseckey` does not match the recovery data. (This can also
            occur if the recovery data is invalid.)
        RecoveryDataError: If recovery failed due to invalid recovery data.
    """
    try:
        (t, sum_coms, hostpubkeys, pubnonces, enc_secshares, cert) = (
            deserialize_recovery_data(recovery_data)
        )
    except Exception as e:
        raise RecoveryDataError("Failed to deserialize recovery data") from e

    n = len(hostpubkeys)
    params = SessionParams(hostpubkeys, t)
    try:
        params_validate(params)
    except SessionParamsError as e:
        raise RecoveryDataError("Invalid session parameters in recovery data") from e

    # Verify cert
    eq_input = recovery_data[: -len(cert)]
    try:
        certeq_verify(hostpubkeys, eq_input, cert)
    except InvalidSignatureInCertificateError as e:
        raise RecoveryDataError("Invalid certificate in recovery data") from e

    # Compute threshold pubkey and individual pubshares
    sum_coms, tweak, _ = sum_coms.invalid_taproot_commit()
    threshold_pubkey = sum_coms.commitment_to_secret()
    pubshares = [sum_coms.pubshare(i) for i in range(n)]

    if hostseckey:
        hostpubkey = hostpubkey_gen(hostseckey)  # HostSeckeyError
        try:
            idx = hostpubkeys.index(hostpubkey)
        except ValueError as e:
            raise HostSeckeyError(
                "Host secret key does not match any host public key in the recovery data"
            ) from e

        # Decrypt share
        enc_context = encpedpop.serialize_enc_context(t, hostpubkeys)
        secshare = encpedpop.decrypt_sum(
            hostseckey,
            hostpubkeys[idx],
            pubnonces,
            enc_context,
            idx,
            enc_secshares[idx],
        )
        secshare_tweaked = secshare + tweak

        # This is just a sanity check. Our signature is valid, so we have done
        # an equivalent check already during the actual session.
        assert VSSCommitment.verify_secshare(secshare_tweaked, pubshares[idx])
    else:
        secshare_tweaked = None

    dkg_output = DKGOutput(
        None if secshare_tweaked is None else secshare_tweaked.to_bytes(),
        threshold_pubkey.to_bytes_compressed(),
        [pubshare.to_bytes_compressed() for pubshare in pubshares],
    )
    return dkg_output, params

||||
class RecoveryDataError(ValueError):
    """Raised if the recovery data is invalid."""

||||
from typing import Tuple, List, NamedTuple, NoReturn

from ..secp256k1proto.secp256k1 import Scalar, GE
from ..secp256k1proto.ecdh import ecdh_libsecp256k1
from ..secp256k1proto.keys import pubkey_gen_plain
from ..secp256k1proto.util import int_from_bytes

from . import simplpedpop
from .util import (
    UnknownFaultyParticipantOrCoordinatorError,
    tagged_hash_bip_dkg,
    FaultyParticipantOrCoordinatorError,
    FaultyCoordinatorError,
)

||||
###
### Encryption
###


def ecdh(
    seckey: bytes, my_pubkey: bytes, their_pubkey: bytes, context: bytes, sending: bool
) -> Scalar:
    data = ecdh_libsecp256k1(seckey, their_pubkey)
    if sending:
        data += my_pubkey + their_pubkey
    else:
        data += their_pubkey + my_pubkey
    assert len(data) == 32 + 2 * 33
    data += context
    return Scalar(int_from_bytes(tagged_hash_bip_dkg("encpedpop ecdh", data)))


def self_pad(symkey: bytes, nonce: bytes, context: bytes) -> Scalar:
    # Pad for symmetric encryption to ourselves
    return Scalar(
        int_from_bytes(
            tagged_hash_bip_dkg("encaps_multi self_pad", symkey + nonce + context)
        )
    )

||||
def encaps_multi(
    secnonce: bytes,
    pubnonce: bytes,
    deckey: bytes,
    enckeys: List[bytes],
    context: bytes,
    idx: int,
) -> List[Scalar]:
    # This is effectively the "Hashed ElGamal" multi-recipient KEM described in
    # Section 5 of "Multi-recipient encryption, revisited" by Alexandre Pinto,
    # Bertram Poettering, Jacob C. N. Schuldt (AsiaCCS 2014). Its crucial
    # feature is to feed the index of the enckey to the hash function. The only
    # difference is that we also feed the pubnonce and context data into the
    # hash function.
    pads = []
    for i, enckey in enumerate(enckeys):
        context_ = i.to_bytes(4, byteorder="big") + context
        if i == idx:
            # We're encrypting to ourselves, so we use a symmetrically derived
            # pad to save the ECDH computation.
            pad = self_pad(symkey=deckey, nonce=pubnonce, context=context_)
        else:
            pad = ecdh(
                seckey=secnonce,
                my_pubkey=pubnonce,
                their_pubkey=enckey,
                context=context_,
                sending=True,
            )
        pads.append(pad)
    return pads


def encrypt_multi(
    secnonce: bytes,
    pubnonce: bytes,
    deckey: bytes,
    enckeys: List[bytes],
    context: bytes,
    idx: int,
    plaintexts: List[Scalar],
) -> List[Scalar]:
    pads = encaps_multi(secnonce, pubnonce, deckey, enckeys, context, idx)
    assert len(plaintexts) == len(pads)
    ciphertexts = [plaintext + pad for plaintext, pad in zip(plaintexts, pads)]
    return ciphertexts

||||
def decaps_multi(
    deckey: bytes,
    enckey: bytes,
    pubnonces: List[bytes],
    context: bytes,
    idx: int,
) -> List[Scalar]:
    context_ = idx.to_bytes(4, byteorder="big") + context
    pads = []
    for sender_idx, pubnonce in enumerate(pubnonces):
        if sender_idx == idx:
            pad = self_pad(symkey=deckey, nonce=pubnonce, context=context_)
        else:
            pad = ecdh(
                seckey=deckey,
                my_pubkey=enckey,
                their_pubkey=pubnonce,
                context=context_,
                sending=False,
            )
        pads.append(pad)
    return pads


def decrypt_sum(
    deckey: bytes,
    enckey: bytes,
    pubnonces: List[bytes],
    context: bytes,
    idx: int,
    sum_ciphertexts: Scalar,
) -> Scalar:
    if idx >= len(pubnonces):
        raise IndexError
    pads = decaps_multi(deckey, enckey, pubnonces, context, idx)
    sum_plaintexts: Scalar = sum_ciphertexts - Scalar.sum(*pads)
    return sum_plaintexts

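Because every ciphertext is plaintext plus pad, the scheme is additively homomorphic: the coordinator can sum the ciphertexts addressed to one recipient, and the recipient removes all pads in a single subtraction, which is exactly what `decrypt_sum` does. A toy sketch over integers modulo a small prime (a stand-in for secp256k1 `Scalar` arithmetic, with arbitrary pads instead of ECDH-derived ones):

```python
Q = 2**31 - 1  # toy modulus standing in for the secp256k1 group order


def toy_encrypt(plaintexts, pads):
    # Additive one-time-pad encryption, one pad per (sender, recipient) pair.
    return [(p + pad) % Q for p, pad in zip(plaintexts, pads)]


def toy_decrypt_sum(sum_ciphertexts, pads):
    # Subtracting the summed pads recovers the summed plaintexts.
    return (sum_ciphertexts - sum(pads)) % Q


shares = [11, 22, 33]   # partial secret shares addressed to one recipient
pads = [105, 207, 309]  # pads shared between the recipient and each sender
cts = toy_encrypt(shares, pads)
assert toy_decrypt_sum(sum(cts) % Q, pads) == sum(shares) % Q
```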
||||
###
### Messages
###


class ParticipantMsg(NamedTuple):
    simpl_pmsg: simplpedpop.ParticipantMsg
    pubnonce: bytes
    enc_shares: List[Scalar]


class CoordinatorMsg(NamedTuple):
    simpl_cmsg: simplpedpop.CoordinatorMsg
    pubnonces: List[bytes]


class CoordinatorInvestigationMsg(NamedTuple):
    enc_partial_secshares: List[Scalar]
    partial_pubshares: List[GE]


###
### Participant
###


class ParticipantState(NamedTuple):
    simpl_state: simplpedpop.ParticipantState
    pubnonce: bytes
    enckeys: List[bytes]
    idx: int


class ParticipantInvestigationData(NamedTuple):
    simpl_bstate: simplpedpop.ParticipantInvestigationData
    enc_secshare: Scalar
    pads: List[Scalar]


def serialize_enc_context(t: int, enckeys: List[bytes]) -> bytes:
    return t.to_bytes(4, byteorder="big") + b"".join(enckeys)

||||
def participant_step1(
    seed: bytes,
    deckey: bytes,
    enckeys: List[bytes],
    t: int,
    idx: int,
    random: bytes,
) -> Tuple[ParticipantState, ParticipantMsg]:
    assert t < 2 ** (4 * 8)
    assert len(random) == 32
    n = len(enckeys)

    # Derive an encryption nonce and a seed for SimplPedPop.
    #
    # SimplPedPop will use its seed to derive the secret shares, which we will
    # encrypt using the encryption nonce. That means that all entropy used in
    # the derivation of simpl_seed should also be in the derivation of the
    # pubnonce, to ensure that we never encrypt different secret shares with the
    # same encryption pads. The foolproof way to achieve this is to simply
    # derive the nonce from simpl_seed.
    enc_context = serialize_enc_context(t, enckeys)
    simpl_seed = tagged_hash_bip_dkg("encpedpop seed", seed + random + enc_context)
    secnonce = tagged_hash_bip_dkg("encpedpop secnonce", simpl_seed)
    pubnonce = pubkey_gen_plain(secnonce)

    simpl_state, simpl_pmsg, shares = simplpedpop.participant_step1(
        simpl_seed, t, n, idx
    )
    assert len(shares) == n

    enc_shares = encrypt_multi(
        secnonce, pubnonce, deckey, enckeys, enc_context, idx, shares
    )

    pmsg = ParticipantMsg(simpl_pmsg, pubnonce, enc_shares)
    state = ParticipantState(simpl_state, pubnonce, enckeys, idx)
    return state, pmsg

||||
def participant_step2(
    state: ParticipantState,
    deckey: bytes,
    cmsg: CoordinatorMsg,
    enc_secshare: Scalar,
) -> Tuple[simplpedpop.DKGOutput, bytes]:
    simpl_state, pubnonce, enckeys, idx = state
    simpl_cmsg, pubnonces = cmsg

    reported_pubnonce = pubnonces[idx]
    if reported_pubnonce != pubnonce:
        raise FaultyCoordinatorError("Coordinator replied with wrong pubnonce")

    enc_context = serialize_enc_context(simpl_state.t, enckeys)
    pads = decaps_multi(deckey, enckeys[idx], pubnonces, enc_context, idx)
    secshare = enc_secshare - Scalar.sum(*pads)

    try:
        dkg_output, eq_input = simplpedpop.participant_step2(
            simpl_state, simpl_cmsg, secshare
        )
    except UnknownFaultyParticipantOrCoordinatorError as e:
        assert isinstance(e.inv_data, simplpedpop.ParticipantInvestigationData)
        # Translate simplpedpop.ParticipantInvestigationData into our own
        # encpedpop.ParticipantInvestigationData.
        inv_data = ParticipantInvestigationData(e.inv_data, enc_secshare, pads)
        raise UnknownFaultyParticipantOrCoordinatorError(inv_data, e.args) from e

    eq_input += b"".join(enckeys) + b"".join(pubnonces)
    return dkg_output, eq_input

||||
def participant_investigate( |
||||
error: UnknownFaultyParticipantOrCoordinatorError, |
||||
cinv: CoordinatorInvestigationMsg, |
||||
) -> NoReturn: |
||||
simpl_inv_data, enc_secshare, pads = error.inv_data |
||||
enc_partial_secshares, partial_pubshares = cinv |
||||
assert len(enc_partial_secshares) == len(pads) |
||||
partial_secshares = [ |
||||
enc_partial_secshare - pad |
||||
for enc_partial_secshare, pad in zip(enc_partial_secshares, pads) |
||||
] |
||||
|
||||
simpl_cinv = simplpedpop.CoordinatorInvestigationMsg(partial_pubshares) |
||||
try: |
||||
simplpedpop.participant_investigate( |
||||
UnknownFaultyParticipantOrCoordinatorError(simpl_inv_data), |
||||
simpl_cinv, |
||||
partial_secshares, |
||||
) |
||||
except simplpedpop.SecshareSumError as e: |
||||
# The secshare is not equal to the sum of the partial secshares in the |
||||
# investigation message. Since the encryption is additively homomorphic, |
||||
# this can only happen if the sum of the *encrypted* secshare is not |
||||
# equal to the sum of the encrypted partial sechares, which is the |
||||
# coordinator's fault. |
||||
assert Scalar.sum(*enc_partial_secshares) != enc_secshare |
||||
raise FaultyCoordinatorError( |
||||
"Sum of encrypted partial secshares not equal to encrypted secshare" |
||||
) from e |
||||
|
||||
|
||||
### |
||||
### Coordinator |
||||
### |
||||
|
||||
|
||||
def coordinator_step( |
||||
pmsgs: List[ParticipantMsg], |
||||
t: int, |
||||
enckeys: List[bytes], |
||||
) -> Tuple[CoordinatorMsg, simplpedpop.DKGOutput, bytes, List[Scalar]]: |
||||
n = len(enckeys) |
||||
if n != len(pmsgs): |
||||
raise ValueError |
||||
|
||||
simpl_pmsgs = [pmsg.simpl_pmsg for pmsg in pmsgs] |
||||
simpl_cmsg, dkg_output, eq_input = simplpedpop.coordinator_step(simpl_pmsgs, t, n) |
||||
pubnonces = [pmsg.pubnonce for pmsg in pmsgs] |
||||
for i in range(n): |
||||
if len(pmsgs[i].enc_shares) != n: |
||||
raise FaultyParticipantOrCoordinatorError( |
||||
i, "Participant sent enc_shares with invalid length" |
||||
) |
||||
enc_secshares = [ |
||||
Scalar.sum(*([pmsg.enc_shares[i] for pmsg in pmsgs])) for i in range(n) |
||||
] |
||||
eq_input += b"".join(enckeys) + b"".join(pubnonces) |
||||
|
||||
# In ChillDKG, the coordinator needs to broadcast the entire enc_secshares |
||||
# array to all participants. But in pure EncPedPop, the coordinator needs to |
||||
# send to each participant i only their entry enc_secshares[i]. |
||||
# |
||||
# Since broadcasting the entire array is not necessary, we don't include it |
||||
# in encpedpop.CoordinatorMsg, but only return it as a side output, so that |
||||
# chilldkg.coordinator_step can pick it up. Implementations of pure |
||||
# EncPedPop will need to decide how to transmit enc_secshares[i] to |
||||
# participant i for participant_step2(); we leave this unspecified. |
||||
return ( |
||||
CoordinatorMsg(simpl_cmsg, pubnonces), |
||||
dkg_output, |
||||
eq_input, |
||||
enc_secshares, |
||||
) |
||||
|
||||
|
||||
def coordinator_investigate( |
||||
pmsgs: List[ParticipantMsg], |
||||
) -> List[CoordinatorInvestigationMsg]: |
||||
n = len(pmsgs) |
||||
simpl_pmsgs = [pmsg.simpl_pmsg for pmsg in pmsgs] |
||||
|
||||
all_enc_partial_secshares = [ |
||||
[pmsg.enc_shares[i] for pmsg in pmsgs] for i in range(n) |
||||
] |
||||
simpl_cinvs = simplpedpop.coordinator_investigate(simpl_pmsgs) |
||||
cinvs = [ |
||||
CoordinatorInvestigationMsg( |
||||
all_enc_partial_secshares[i], simpl_cinvs[i].partial_pubshares |
||||
) |
||||
for i in range(n) |
||||
] |
||||
return cinvs |
||||
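The investigation logic above relies on the encryption being additively homomorphic: the coordinator can sum ciphertexts, and the recipient recovers the summed plaintext by subtracting the sum of the pads. A minimal sketch of this property with toy integer arithmetic (a small prime stands in for the secp256k1 scalar field; the real protocol derives pads via ECDH):

```python
# Hypothetical toy modulus for illustration only.
Q = 2**31 - 1

def encrypt(share: int, pad: int) -> int:
    # One-time-pad-style additive encryption, as in encrypt_multi().
    return (share + pad) % Q

# Three participants each send participant 0 one partial share,
# encrypted under a fresh pad.
shares = [123456, 789012, 345678]
pads = [1111, 2222, 3333]
enc_shares = [encrypt(s, p) for s, p in zip(shares, pads)]

# The coordinator aggregates ciphertexts without learning the plaintexts ...
enc_secshare = sum(enc_shares) % Q

# ... and participant 0 removes the sum of its pads to obtain the summed share.
secshare = (enc_secshare - sum(pads)) % Q
assert secshare == sum(shares) % Q
```

This is exactly the identity that `participant_step2` exploits via `enc_secshare - Scalar.sum(*pads)`.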
@ -0,0 +1,316 @@

from secrets import token_bytes as random_bytes
from typing import List, NamedTuple, NewType, Tuple, Optional, NoReturn

from ..secp256k1proto.bip340 import schnorr_sign, schnorr_verify
from ..secp256k1proto.secp256k1 import GE, Scalar
from .util import (
    BIP_TAG,
    FaultyParticipantOrCoordinatorError,
    FaultyCoordinatorError,
    UnknownFaultyParticipantOrCoordinatorError,
)
from .vss import VSS, VSSCommitment


###
### Exceptions
###


class SecshareSumError(ValueError):
    pass


###
### Proofs of possession (pops)
###


Pop = NewType("Pop", bytes)

POP_MSG_TAG = BIP_TAG + "pop message"


def pop_msg(idx: int) -> bytes:
    return idx.to_bytes(4, byteorder="big")


def pop_prove(seckey: bytes, idx: int, aux_rand: bytes = 32 * b"\x00") -> Pop:
    sig = schnorr_sign(
        pop_msg(idx), seckey, aux_rand=random_bytes(32), tag_prefix=POP_MSG_TAG
    )
    return Pop(sig)


def pop_verify(pop: Pop, pubkey: bytes, idx: int) -> bool:
    return schnorr_verify(pop_msg(idx), pubkey, pop, tag_prefix=POP_MSG_TAG)


###
### Messages
###


class ParticipantMsg(NamedTuple):
    com: VSSCommitment
    pop: Pop


class CoordinatorMsg(NamedTuple):
    coms_to_secrets: List[GE]
    sum_coms_to_nonconst_terms: List[GE]
    pops: List[Pop]

    def to_bytes(self) -> bytes:
        return b"".join(
            [
                P.to_bytes_compressed_with_infinity()
                for P in self.coms_to_secrets + self.sum_coms_to_nonconst_terms
            ]
        ) + b"".join(self.pops)


class CoordinatorInvestigationMsg(NamedTuple):
    partial_pubshares: List[GE]


###
### Other common definitions
###


class DKGOutput(NamedTuple):
    secshare: Optional[bytes]  # None for coordinator
    threshold_pubkey: bytes
    pubshares: List[bytes]


def assemble_sum_coms(
    coms_to_secrets: List[GE], sum_coms_to_nonconst_terms: List[GE]
) -> VSSCommitment:
    # Sum the commitments to the secrets
    return VSSCommitment(
        [GE.sum(*(c for c in coms_to_secrets))] + sum_coms_to_nonconst_terms
    )


###
### Participant
###


class ParticipantState(NamedTuple):
    t: int
    n: int
    idx: int
    com_to_secret: GE


class ParticipantInvestigationData(NamedTuple):
    n: int
    idx: int
    secshare: Scalar
    pubshare: GE


# To keep the algorithms of SimplPedPop and EncPedPop purely non-interactive
# computations, we omit explicit invocations of an interactive equality check
# protocol. ChillDKG will take care of invoking the equality check protocol.


def participant_step1(
    seed: bytes, t: int, n: int, idx: int
) -> Tuple[
    ParticipantState,
    ParticipantMsg,
    # The following return value is a list of n partial secret shares generated
    # by this participant. The item at index i is supposed to be made available
    # to participant i privately, e.g., via an external secure channel. See also
    # the function participant_step2_prepare_secshare().
    List[Scalar],
]:
    if t > n:
        raise ValueError
    if idx >= n:
        raise IndexError
    if len(seed) != 32:
        raise ValueError

    vss = VSS.generate(seed, t)  # OverflowError if t >= 2**32
    partial_secshares_from_me = vss.secshares(n)
    pop = pop_prove(vss.secret().to_bytes(), idx)

    com = vss.commit()
    com_to_secret = com.commitment_to_secret()
    msg = ParticipantMsg(com, pop)
    state = ParticipantState(t, n, idx, com_to_secret)
    return state, msg, partial_secshares_from_me


# Helper function to prepare the secshare for participant idx's
# participant_step2() by summing the partial_secshares returned by all
# participants' participant_step1().
#
# In a pure run of SimplPedPop where secret shares are sent via external secure
# channels (i.e., EncPedPop is not used), each participant needs to run this
# function in preparation of their participant_step2(). Since this computation
# involves secret data, it cannot be delegated to the coordinator, as opposed
# to other aggregation steps.
#
# If EncPedPop is used instead (as a wrapper of SimplPedPop), the coordinator
# can securely aggregate the encrypted partial secshares into an encrypted
# secshare by exploiting the additively homomorphic property of the encryption.
def participant_step2_prepare_secshare(
    partial_secshares: List[Scalar],
) -> Scalar:
    secshare: Scalar  # REVIEW Work around missing type annotation of Scalar.sum
    secshare = Scalar.sum(*partial_secshares)
    return secshare


def participant_step2(
    state: ParticipantState,
    cmsg: CoordinatorMsg,
    secshare: Scalar,
) -> Tuple[DKGOutput, bytes]:
    t, n, idx, com_to_secret = state
    coms_to_secrets, sum_coms_to_nonconst_terms, pops = cmsg

    assert len(coms_to_secrets) == n
    assert len(sum_coms_to_nonconst_terms) == t - 1
    assert len(pops) == n

    if coms_to_secrets[idx] != com_to_secret:
        raise FaultyCoordinatorError(
            "Coordinator sent unexpected first group element for local index"
        )

    for i in range(n):
        if i == idx:
            # No need to check our own pop.
            continue
        if coms_to_secrets[i].infinity:
            raise FaultyParticipantOrCoordinatorError(
                i, "Participant sent invalid commitment"
            )
        # This can be optimized: We serialize the coms_to_secrets[i] here, but
        # schnorr_verify (inside pop_verify) will need to deserialize it again,
        # which involves computing a square root to obtain the y coordinate.
        if not pop_verify(pops[i], coms_to_secrets[i].to_bytes_xonly(), i):
            raise FaultyParticipantOrCoordinatorError(
                i, "Participant sent invalid proof-of-knowledge"
            )

    sum_coms = assemble_sum_coms(coms_to_secrets, sum_coms_to_nonconst_terms)
    # Verifying the tweaked secshare against the tweaked pubshare is equivalent
    # to verifying the untweaked secshare against the untweaked pubshare, but
    # avoids computing the untweaked pubshare in the happy path and thereby
    # moves a group addition to the error path.
    sum_coms_tweaked, tweak, pubtweak = sum_coms.invalid_taproot_commit()
    pubshare_tweaked = sum_coms_tweaked.pubshare(idx)
    secshare_tweaked = secshare + tweak
    if not VSSCommitment.verify_secshare(secshare_tweaked, pubshare_tweaked):
        pubshare = pubshare_tweaked - pubtweak
        raise UnknownFaultyParticipantOrCoordinatorError(
            ParticipantInvestigationData(n, idx, secshare, pubshare),
            "Received invalid secshare, "
            "consider investigation procedure to determine faulty party",
        )

    threshold_pubkey = sum_coms_tweaked.commitment_to_secret()
    pubshares = [
        sum_coms_tweaked.pubshare(i) if i != idx else pubshare_tweaked for i in range(n)
    ]
    dkg_output = DKGOutput(
        secshare_tweaked.to_bytes(),
        threshold_pubkey.to_bytes_compressed(),
        [pubshare.to_bytes_compressed() for pubshare in pubshares],
    )
    eq_input = t.to_bytes(4, byteorder="big") + sum_coms.to_bytes()
    return dkg_output, eq_input


def participant_investigate(
    error: UnknownFaultyParticipantOrCoordinatorError,
    cinv: CoordinatorInvestigationMsg,
    partial_secshares: List[Scalar],
) -> NoReturn:
    n, idx, secshare, pubshare = error.inv_data
    partial_pubshares = cinv.partial_pubshares

    if GE.sum(*partial_pubshares) != pubshare:
        raise FaultyCoordinatorError("Sum of partial pubshares not equal to pubshare")

    if Scalar.sum(*partial_secshares) != secshare:
        raise SecshareSumError("Sum of partial secshares not equal to secshare")

    for i in range(n):
        if not VSSCommitment.verify_secshare(
            partial_secshares[i], partial_pubshares[i]
        ):
            if i != idx:
                raise FaultyParticipantOrCoordinatorError(
                    i, "Participant sent invalid partial secshare"
                )
            else:
                # We are not faulty, so the coordinator must be.
                raise FaultyCoordinatorError(
                    "Coordinator fiddled with the share from me to myself"
                )

    # We now know:
    # - The sum of the partial secshares is equal to the secshare.
    # - The sum of the partial pubshares is equal to the pubshare.
    # - Every partial secshare matches its corresponding partial pubshare.
    # Hence, the secshare matches the pubshare.
    assert VSSCommitment.verify_secshare(secshare, pubshare)

    # This should never happen (unless the caller fiddled with the inputs).
    raise RuntimeError(
        "participant_investigate() was called, but all inputs are consistent."
    )


###
### Coordinator
###


def coordinator_step(
    pmsgs: List[ParticipantMsg], t: int, n: int
) -> Tuple[CoordinatorMsg, DKGOutput, bytes]:
    # Sum the commitments to the i-th coefficients for i > 0
    #
    # This procedure corresponds to the one described by Pedersen in Section 5.1
    # of "Non-Interactive and Information-Theoretic Secure Verifiable Secret
    # Sharing". However, we don't sum the commitments to the secrets (i == 0)
    # because they'll be necessary to check the pops.
    coms_to_secrets = [pmsg.com.commitment_to_secret() for pmsg in pmsgs]
    # But we can sum the commitments to the non-constant terms.
    sum_coms_to_nonconst_terms = [
        GE.sum(*(pmsg.com.commitment_to_nonconst_terms()[j] for pmsg in pmsgs))
        for j in range(t - 1)
    ]
    pops = [pmsg.pop for pmsg in pmsgs]
    cmsg = CoordinatorMsg(coms_to_secrets, sum_coms_to_nonconst_terms, pops)

    sum_coms = assemble_sum_coms(coms_to_secrets, sum_coms_to_nonconst_terms)
    sum_coms_tweaked, _, _ = sum_coms.invalid_taproot_commit()
    threshold_pubkey = sum_coms_tweaked.commitment_to_secret()
    pubshares = [sum_coms_tweaked.pubshare(i) for i in range(n)]

    dkg_output = DKGOutput(
        None,
        threshold_pubkey.to_bytes_compressed(),
        [pubshare.to_bytes_compressed() for pubshare in pubshares],
    )
    eq_input = t.to_bytes(4, byteorder="big") + sum_coms.to_bytes()
    return cmsg, dkg_output, eq_input


def coordinator_investigate(
    pmsgs: List[ParticipantMsg],
) -> List[CoordinatorInvestigationMsg]:
    n = len(pmsgs)
    all_partial_pubshares = [[pmsg.com.pubshare(i) for pmsg in pmsgs] for i in range(n)]
    return [CoordinatorInvestigationMsg(all_partial_pubshares[i]) for i in range(n)]
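The reason `participant_step2_prepare_secshare` can simply sum the partial shares is that Shamir sharing is linear: summing each participant's share of n independent polynomials yields a share of the summed polynomial, whose constant term is the sum of all secrets. A toy sketch with plain integers modulo a hypothetical small prime (standing in for the `Scalar` field):

```python
import random

# Hypothetical small-prime field standing in for the secp256k1 scalar field.
P = 2**61 - 1
t, n = 2, 3  # threshold t, participants n

def shamir_shares(coeffs):
    # Share for participant i is f(i + 1); f(0) is the secret.
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

random.seed(1)
polys = [[random.randrange(P) for _ in range(t)] for _ in range(n)]
all_shares = [shamir_shares(poly) for poly in polys]

# participant_step2_prepare_secshare: participant i sums the partial shares
# it received from all n participants (including itself).
secshares = [sum(all_shares[k][i] for k in range(n)) % P for i in range(n)]

# These are exactly the shares of the *summed* polynomial, whose constant
# term is the sum of everyone's secrets.
sum_poly = [sum(poly[j] for poly in polys) % P for j in range(t)]
assert secshares == shamir_shares(sum_poly)
```

The same linearity is what lets the EncPedPop coordinator do this aggregation on ciphertexts instead.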
@ -0,0 +1,103 @@

from typing import Any

from ..secp256k1proto.util import tagged_hash


BIP_TAG = "BIP DKG/"


def tagged_hash_bip_dkg(tag: str, msg: bytes) -> bytes:
    return tagged_hash(BIP_TAG + tag, msg)


class ProtocolError(Exception):
    """Base exception for errors caused by received protocol messages."""


class FaultyParticipantError(ProtocolError):
    """Raised if a participant is faulty.

    This exception is raised by the coordinator code when it detects faulty
    behavior by a participant, i.e., a participant has deviated from the
    protocol. The index of the participant is provided as part of the exception.
    Assuming protocol messages have been transmitted correctly and the
    coordinator itself is not faulty, this exception implies that the
    participant is indeed faulty.

    This exception is raised only by the coordinator code. Some faulty behavior
    by participants will be detected by the other participants instead.
    See `FaultyParticipantOrCoordinatorError` for details.

    Attributes:
        participant (int): Index of the faulty participant.
    """

    def __init__(self, participant: int, *args: Any):
        self.participant = participant
        super().__init__(participant, *args)


class FaultyParticipantOrCoordinatorError(ProtocolError):
    """Raised if another known participant or the coordinator is faulty.

    This exception is raised by the participant code when it detects what looks
    like faulty behavior by a suspected participant. The index of the suspected
    participant is provided as part of the exception.

    Importantly, this exception is not proof that the suspected participant is
    indeed faulty. It is instead possible that the coordinator has deviated from
    the protocol in a way that makes it look as if the suspected participant has
    deviated from the protocol. In other words, assuming messages have been
    transmitted correctly and the raising participant is not faulty, this
    exception implies that
      - the suspected participant is faulty,
      - *or* the coordinator is faulty (and has framed the suspected
        participant).

    This exception is raised only by the participant code. Some faulty behavior
    by participants will be detected by the coordinator instead. See
    `FaultyParticipantError` for details.

    Attributes:
        participant (int): Index of the suspected participant.
    """

    def __init__(self, participant: int, *args: Any):
        self.participant = participant
        super().__init__(participant, *args)


class FaultyCoordinatorError(ProtocolError):
    """Raised if the coordinator is faulty.

    This exception is raised by the participant code when it detects faulty
    behavior by the coordinator, i.e., the coordinator has deviated from the
    protocol. Assuming protocol messages have been transmitted correctly and the
    raising participant is not faulty, this exception implies that the
    coordinator is indeed faulty.
    """


class UnknownFaultyParticipantOrCoordinatorError(ProtocolError):
    """Raised if another unknown participant or the coordinator is faulty.

    This exception is raised by the participant code when it detects what looks
    like faulty behavior by some other participant, but there is insufficient
    information to determine which participant should be suspected.

    To determine a suspected participant, the raising participant may choose to
    run the optional investigation procedure of the protocol, which requires
    obtaining an investigation message from the coordinator. See the
    `participant_investigate` function for details.

    This is only raised for specific faulty behavior by another participant
    which cannot be attributed to another participant without further help of
    the coordinator (namely, sending invalid encrypted secret shares).

    Attributes:
        inv_data: Information required to perform the investigation.
    """

    def __init__(self, inv_data: Any, *args: Any):
        self.inv_data = inv_data
        super().__init__(*args)
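The exception hierarchy above is designed so callers can attribute blame with a plain try/except ladder. A minimal, self-contained sketch using stand-in classes that mirror the hierarchy (in real code, import the classes from the DKG reference module instead of redefining them):

```python
from typing import Any

# Minimal stand-ins mirroring the hierarchy above, for illustration only.
class ProtocolError(Exception):
    pass

class FaultyParticipantOrCoordinatorError(ProtocolError):
    def __init__(self, participant: int, *args: Any):
        self.participant = participant
        super().__init__(participant, *args)

class FaultyCoordinatorError(ProtocolError):
    pass

def attribute_blame(fault: ProtocolError) -> str:
    try:
        raise fault
    except FaultyParticipantOrCoordinatorError as e:
        # Not proof of guilt: the coordinator may have framed this participant.
        return f"suspect participant {e.participant} or the coordinator"
    except FaultyCoordinatorError:
        return "coordinator is faulty"

assert attribute_blame(FaultyParticipantOrCoordinatorError(2, "bad share")) == \
    "suspect participant 2 or the coordinator"
assert attribute_blame(FaultyCoordinatorError("wrong pubnonce")) == \
    "coordinator is faulty"
```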
@ -0,0 +1,146 @@

from __future__ import annotations

from typing import List, Tuple

from ..secp256k1proto.secp256k1 import GE, G, Scalar
from ..secp256k1proto.util import tagged_hash

from .util import tagged_hash_bip_dkg


class Polynomial:
    # A scalar polynomial.
    #
    # A polynomial f of degree at most t - 1 is represented by a list `coeffs`
    # of t coefficients, i.e., f(x) = coeffs[0] + ... + coeffs[t-1] * x^(t-1).
    coeffs: List[Scalar]

    def __init__(self, coeffs: List[Scalar]) -> None:
        self.coeffs = coeffs

    def eval(self, x: Scalar) -> Scalar:
        # Evaluate a polynomial at position x.

        value = Scalar(0)
        # Reverse coefficients to compute evaluation via Horner's method
        for coeff in self.coeffs[::-1]:
            value = value * x + coeff
        return value

    def __call__(self, x: Scalar) -> Scalar:
        return self.eval(x)


class VSSCommitment:
    ges: List[GE]

    def __init__(self, ges: List[GE]) -> None:
        self.ges = ges

    def t(self) -> int:
        return len(self.ges)

    def pubshare(self, i: int) -> GE:
        pubshare: GE = GE.batch_mul(
            *(((i + 1) ** j, self.ges[j]) for j in range(0, len(self.ges)))
        )
        return pubshare

    @staticmethod
    def verify_secshare(secshare: Scalar, pubshare: GE) -> bool:
        # The caller needs to provide the correct pubshare(i)
        actual = secshare * G
        valid: bool = actual == pubshare
        return valid

    def to_bytes(self) -> bytes:
        # Return commitments to the coefficients of f.
        return b"".join([ge.to_bytes_compressed_with_infinity() for ge in self.ges])

    def __add__(self, other: VSSCommitment) -> VSSCommitment:
        assert self.t() == other.t()
        return VSSCommitment([self.ges[i] + other.ges[i] for i in range(self.t())])

    @staticmethod
    def from_bytes_and_t(b: bytes, t: int) -> VSSCommitment:
        if len(b) != 33 * t:
            raise ValueError
        ges = [GE.from_bytes_compressed(b[i : i + 33]) for i in range(0, 33 * t, 33)]
        return VSSCommitment(ges)

    def commitment_to_secret(self) -> GE:
        return self.ges[0]

    def commitment_to_nonconst_terms(self) -> List[GE]:
        return self.ges[1 : self.t()]

    def invalid_taproot_commit(self) -> Tuple[VSSCommitment, Scalar, GE]:
        # Return a modified VSS commitment such that the threshold public key
        # generated from it has an unspendable BIP 341 Taproot script path.
        #
        # Specifically, for a VSS commitment `com`, we have:
        # `com.invalid_taproot_commit().commitment_to_secret() = com.commitment_to_secret() + t*G`.
        #
        # The tweak `t` commits to an empty message, which is invalid according
        # to BIP 341 for Taproot script spends. This follows BIP 341's
        # recommended approach for committing to an unspendable script path.
        #
        # This prevents a malicious participant from secretly inserting a *valid*
        # Taproot commitment to a script path into the summed VSS commitment during
        # the DKG protocol. If the resulting threshold public key was used directly
        # in a BIP 341 Taproot output, the malicious participant would be able to
        # spend the output using their hidden script path.
        #
        # The function returns the updated VSS commitment and the tweak `t` which
        # must be added to all secret shares of the commitment.
        pk = self.commitment_to_secret()
        secshare_tweak = Scalar.from_bytes(
            tagged_hash("TapTweak", pk.to_bytes_compressed())
        )
        pubshare_tweak = secshare_tweak * G
        vss_tweak = VSSCommitment([pubshare_tweak] + [GE()] * (self.t() - 1))
        return (self + vss_tweak, secshare_tweak, pubshare_tweak)


class VSS:
    f: Polynomial

    def __init__(self, f: Polynomial) -> None:
        self.f = f

    @staticmethod
    def generate(seed: bytes, t: int) -> VSS:
        coeffs = [
            Scalar.from_bytes(
                tagged_hash_bip_dkg("vss coeffs", seed + i.to_bytes(4, byteorder="big"))
            )
            for i in range(t)
        ]
        return VSS(Polynomial(coeffs))

    def secshare_for(self, i: int) -> Scalar:
        # Return the secret share for the participant with index i.
        #
        # This computes f(i+1).
        if i < 0:
            raise ValueError(f"Invalid participant index: {i}")
        x = Scalar(i + 1)
        # Ensure we don't compute f(0), which is the secret.
        assert x != Scalar(0)
        return self.f(x)

    def secshares(self, n: int) -> List[Scalar]:
        # Return the secret shares for the participants with indices 0..n-1.
        #
        # This computes [f(1), ..., f(n)].
        return [self.secshare_for(i) for i in range(0, n)]

    def commit(self) -> VSSCommitment:
        return VSSCommitment([c * G for c in self.f.coeffs])

    def secret(self) -> Scalar:
        # Return the secret to be shared.
        #
        # This computes f(0).
        return self.f.coeffs[0]
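The `verify_secshare` check above is Feldman's VSS equation: `secshare * G` must equal the pubshare derived purely from the public coefficient commitments. The same identity can be sketched in the multiplicative group modulo a prime, where `g**a mod p` plays the role of `a*G` (toy parameters only, not the secp256k1 group):

```python
# Toy Feldman VSS in a multiplicative group mod a prime.
p = 2**127 - 1  # a Mersenne prime, as a hypothetical toy modulus
q = p - 1       # exponents live mod q, since the order of g divides q
g = 3

t, n = 3, 5
coeffs = [123, 456, 789]  # f(x) = 123 + 456*x + 789*x**2, secret is f(0)

# Commitment: one group element per coefficient, as in VSS.commit().
commitment = [pow(g, c, p) for c in coeffs]

def secshare_for(i):
    # f(i + 1) mod q, as in VSS.secshare_for()
    x = i + 1
    return sum(c * x**j for j, c in enumerate(coeffs)) % q

def pubshare(i):
    # Product of commitment[j] ** (i+1)**j, as in VSSCommitment.pubshare()
    x = i + 1
    result = 1
    for j, A in enumerate(commitment):
        result = result * pow(A, x**j, p) % p
    return result

# verify_secshare: g**secshare must equal the pubshare derived from the
# public commitment alone.
for i in range(n):
    assert pow(g, secshare_for(i), p) == pubshare(i)
```

This works because `g**f(x) = prod_j (g**a_j)**(x**j)`; a share that fails the check cannot come from the committed polynomial.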
@ -0,0 +1,450 @@

# -*- coding: utf-8 -*-

# BIP FROST Signing reference implementation
#
# It's worth noting that many functions, types, and exceptions were directly
# copied or modified from the MuSig2 (BIP 327) reference code, found at:
# https://github.com/bitcoin/bips/blob/master/bip-0327/reference.py
#
# WARNING: This implementation is for demonstration purposes only and _not_ to
# be used in production environments. The code is vulnerable to timing attacks,
# for example.

from typing import Any, List, Optional, Tuple, NewType, NamedTuple
import itertools
import secrets
import time

from .utils.bip340 import *

PlainPk = NewType('PlainPk', bytes)
XonlyPk = NewType('XonlyPk', bytes)

# There are two types of exceptions that can be raised by this implementation:
# - ValueError for indicating that an input doesn't conform to some function
#   precondition (e.g. an input array is the wrong length, a serialized
#   representation doesn't have the correct format).
# - InvalidContributionError for indicating that a signer (or the
#   aggregator) is misbehaving in the protocol.
#
# Assertions are used to (1) satisfy the type-checking system, and (2) check for
# inconvenient events that can't happen except with negligible probability (e.g.
# output of a hash function is 0) and can't be manually triggered by any
# signer.

# This exception is raised if a party (signer or nonce aggregator) sends invalid
# values. Actual implementations should not crash when receiving invalid
# contributions. Instead, they should hold the offending party accountable.
class InvalidContributionError(Exception):
    def __init__(self, signer_id, contrib):
        # participant identifier of the signer who sent the invalid value
        self.id = signer_id
        # contrib is one of "pubkey", "pubnonce", "aggnonce", or "psig".
        self.contrib = contrib

infinity = None

def xbytes(P: Point) -> bytes:
    return bytes_from_int(x(P))

def cbytes(P: Point) -> bytes:
    a = b'\x02' if has_even_y(P) else b'\x03'
    return a + xbytes(P)

def cbytes_ext(P: Optional[Point]) -> bytes:
    if is_infinite(P):
        return (0).to_bytes(33, byteorder='big')
    assert P is not None
    return cbytes(P)

def point_negate(P: Optional[Point]) -> Optional[Point]:
    if P is None:
        return P
    return (x(P), p - y(P))

def cpoint(x: bytes) -> Point:
    if len(x) != 33:
        raise ValueError('x is not a valid compressed point.')
    P = lift_x(x[1:33])
    if P is None:
        raise ValueError('x is not a valid compressed point.')
    if x[0] == 2:
        return P
    elif x[0] == 3:
        P = point_negate(P)
        assert P is not None
        return P
    else:
        raise ValueError('x is not a valid compressed point.')

def cpoint_ext(x: bytes) -> Optional[Point]:
    if x == (0).to_bytes(33, 'big'):
        return None
    else:
        return cpoint(x)

def int_ids(lst: List[bytes]) -> List[int]:
    res = []
    for x in lst:
        id_ = int_from_bytes(x)
        # todo: add check for < max_participants?
        if not 1 <= id_ < n:
            raise ValueError('x is not a valid participant identifier.')
        res.append(id_)
    return res

# Return the plain public key corresponding to a given secret key
def individual_pk(seckey: bytes) -> PlainPk:
    d0 = int_from_bytes(seckey)
    if not (1 <= d0 <= n - 1):
        raise ValueError('The secret key must be an integer in the range 1..n-1.')
    P = point_mul(G, d0)
    assert P is not None
    return PlainPk(cbytes(P))

def derive_interpolating_value_internal(L: List[int], x_i: int) -> int:
    num, deno = 1, 1
    for x_j in L:
        if x_j == x_i:
            continue
        num *= x_j
        deno *= (x_j - x_i)
    return num * pow(deno, n - 2, n) % n
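The interpolating value computed above is the Lagrange coefficient evaluated at zero: weighting each signer's share by it reconstructs `f(0)` from any set of distinct identifiers. A quick check with the same formula over a small prime field (a hypothetical toy modulus standing in for the secp256k1 group order `n`):

```python
# Toy check of the Lagrange coefficient, over a small prime field.
n = 2**31 - 1  # hypothetical prime modulus for illustration

def lagrange_at_zero(L, x_i):
    # Same formula as derive_interpolating_value_internal()
    num, deno = 1, 1
    for x_j in L:
        if x_j == x_i:
            continue
        num *= x_j
        deno *= (x_j - x_i)
    return num * pow(deno, n - 2, n) % n

# Shamir shares of secret f(0) = 42 with f(x) = 42 + 7*x (threshold 2)
f = lambda x: (42 + 7 * x) % n
ids = [1, 3]  # any two distinct participant identifiers suffice
shares = [f(i) for i in ids]

# The weighted sum of the shares reconstructs the secret.
secret = sum(lagrange_at_zero(ids, i) * s for i, s in zip(ids, shares)) % n
assert secret == 42
```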

def derive_interpolating_value(ids: List[bytes], my_id: bytes) -> int:
    if not my_id in ids:
        raise ValueError('The signer\'s id must be present in the participant identifier list.')
    if not all(ids.count(pid) <= 1 for pid in ids):
        raise ValueError('The participant identifier list must contain unique elements.')
    # todo: turn this into raise ValueError?
    assert 1 <= int_from_bytes(my_id) < n
    integer_ids = int_ids(ids)
    return derive_interpolating_value_internal(integer_ids, int_from_bytes(my_id))

def check_pubshares_correctness(secshares: List[bytes], pubshares: List[PlainPk]) -> bool:
    assert len(secshares) == len(pubshares)
    for secshare, pubshare in zip(secshares, pubshares):
        if not individual_pk(secshare) == pubshare:
            return False
    return True

def check_group_pubkey_correctness(min_participants: int, group_pk: PlainPk, ids: List[bytes], pubshares: List[PlainPk]) -> bool:
    assert len(ids) == len(pubshares)
    assert len(ids) >= min_participants

    max_participants = len(ids)
    # loop through all possible number of signers
    for signer_count in range(min_participants, max_participants + 1):
        # loop through all possible signer sets with length `signer_count`
        for signer_set in itertools.combinations(zip(ids, pubshares), signer_count):
            signer_ids = [pid for pid, pubshare in signer_set]
            signer_pubshares = [pubshare for pid, pubshare in signer_set]
            expected_pk = derive_group_pubkey(signer_pubshares, signer_ids)
            if expected_pk != group_pk:
                return False
    return True

def check_frost_key_compatibility(max_participants: int, min_participants: int, group_pk: PlainPk, ids: List[bytes], secshares: List[bytes], pubshares: List[PlainPk]) -> bool:
    if not max_participants >= min_participants > 1:
        return False
    if not len(ids) == len(secshares) == len(pubshares) == max_participants:
        return False
    pubshare_check = check_pubshares_correctness(secshares, pubshares)
    group_pk_check = check_group_pubkey_correctness(min_participants, group_pk, ids, pubshares)
    return pubshare_check and group_pk_check
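`check_group_pubkey_correctness` loops over every admissible signer set because a consistent sharing has the property that any subset of at least `t` shares interpolates to the same secret. A toy illustration of that property with plain integers modulo a hypothetical small prime (the real check works on the curve via `derive_group_pubkey`):

```python
import itertools

n = 2**31 - 1  # hypothetical prime modulus
t, total = 2, 4
f = lambda x: (42 + 7 * x) % n  # degree t-1 polynomial, secret f(0) = 42
ids = list(range(1, total + 1))
shares = {i: f(i) for i in ids}

def interpolate_secret(subset):
    # Lagrange interpolation at zero over the chosen signer set.
    acc = 0
    for i in subset:
        num, deno = 1, 1
        for j in subset:
            if j != i:
                num, deno = num * j, deno * (j - i)
        acc += shares[i] * num * pow(deno, n - 2, n)
    return acc % n

# Every signer set of size >= t yields the same secret, mirroring the
# subset loop in check_group_pubkey_correctness().
for count in range(t, total + 1):
    for subset in itertools.combinations(ids, count):
        assert interpolate_secret(subset) == 42
```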

TweakContext = NamedTuple('TweakContext', [('Q', Point),
                                           ('gacc', int),
                                           ('tacc', int)])
AGGREGATOR_ID = b'aggregator'

def get_xonly_pk(tweak_ctx: TweakContext) -> XonlyPk:
    Q, _, _ = tweak_ctx
    return XonlyPk(xbytes(Q))

def get_plain_pk(tweak_ctx: TweakContext) -> PlainPk:
    Q, _, _ = tweak_ctx
    return PlainPk(cbytes(Q))

# nit: switch the args ordering
def derive_group_pubkey(pubshares: List[PlainPk], ids: List[bytes]) -> PlainPk:
    assert len(pubshares) == len(ids)
    assert AGGREGATOR_ID not in ids
    Q = infinity
    for my_id, pubshare in zip(ids, pubshares):
        try:
            X_i = cpoint(pubshare)
        except ValueError:
            raise InvalidContributionError(int_from_bytes(my_id), "pubshare")
        lam_i = derive_interpolating_value(ids, my_id)
        Q = point_add(Q, point_mul(X_i, lam_i))
    # Q is not the point at infinity except with negligible probability.
    assert Q is not infinity
    return PlainPk(cbytes(Q))

def tweak_ctx_init(pubshares: List[PlainPk], ids: List[bytes]) -> TweakContext:
    group_pk = derive_group_pubkey(pubshares, ids)
    Q = cpoint(group_pk)
    gacc = 1
    tacc = 0
    return TweakContext(Q, gacc, tacc)

def apply_tweak(tweak_ctx: TweakContext, tweak: bytes, is_xonly: bool) -> TweakContext:
    if len(tweak) != 32:
        raise ValueError('The tweak must be a 32-byte array.')
    Q, gacc, tacc = tweak_ctx
    if is_xonly and not has_even_y(Q):
        g = n - 1
    else:
        g = 1
    t = int_from_bytes(tweak)
    if t >= n:
        raise ValueError('The tweak must be less than n.')
    Q_ = point_add(point_mul(Q, g), point_mul(G, t))
    if Q_ is None:
        raise ValueError('The result of tweaking cannot be infinity.')
    gacc_ = g * gacc % n
    tacc_ = (t + g * tacc) % n
    return TweakContext(Q_, gacc_, tacc_)
|
||||
def bytes_xor(a: bytes, b: bytes) -> bytes: |
||||
return bytes(x ^ y for x, y in zip(a, b)) |
||||
|
||||
def nonce_hash(rand: bytes, pubshare: PlainPk, group_pk: XonlyPk, i: int, msg_prefixed: bytes, extra_in: bytes) -> int: |
||||
buf = b'' |
||||
buf += rand |
||||
buf += len(pubshare).to_bytes(1, 'big') |
||||
buf += pubshare |
||||
buf += len(group_pk).to_bytes(1, 'big') |
||||
buf += group_pk |
||||
buf += msg_prefixed |
||||
buf += len(extra_in).to_bytes(4, 'big') |
||||
buf += extra_in |
||||
buf += i.to_bytes(1, 'big') |
||||
return int_from_bytes(tagged_hash('FROST/nonce', buf)) |
||||
|
||||
def nonce_gen_internal(rand_: bytes, secshare: Optional[bytes], pubshare: Optional[PlainPk], group_pk: Optional[XonlyPk], msg: Optional[bytes], extra_in: Optional[bytes]) -> Tuple[bytearray, bytes]: |
||||
if secshare is not None: |
||||
rand = bytes_xor(secshare, tagged_hash('FROST/aux', rand_)) |
||||
else: |
||||
rand = rand_ |
||||
if pubshare is None: |
||||
pubshare = PlainPk(b'') |
||||
if group_pk is None: |
||||
group_pk = XonlyPk(b'') |
||||
if msg is None: |
||||
msg_prefixed = b'\x00' |
||||
else: |
||||
msg_prefixed = b'\x01' |
||||
msg_prefixed += len(msg).to_bytes(8, 'big') |
||||
msg_prefixed += msg |
||||
if extra_in is None: |
||||
extra_in = b'' |
||||
k_1 = nonce_hash(rand, pubshare, group_pk, 0, msg_prefixed, extra_in) % n |
||||
k_2 = nonce_hash(rand, pubshare, group_pk, 1, msg_prefixed, extra_in) % n |
||||
# k_1 == 0 or k_2 == 0 cannot occur except with negligible probability. |
||||
assert k_1 != 0 |
||||
assert k_2 != 0 |
||||
R_s1 = point_mul(G, k_1) |
||||
R_s2 = point_mul(G, k_2) |
||||
assert R_s1 is not None |
||||
assert R_s2 is not None |
||||
pubnonce = cbytes(R_s1) + cbytes(R_s2) |
||||
# use mutable `bytearray` since secnonce need to be replaced with zeros during signing. |
||||
secnonce = bytearray(bytes_from_int(k_1) + bytes_from_int(k_2)) |
||||
return secnonce, pubnonce |
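nonce_gen_internal masks the secret share by XORing it with a tagged hash of the fresh randomness (the same blinding trick BIP 340 uses for aux_rand). A self-contained sketch of why XOR masking loses no information (the share and randomness below are illustrative stand-ins):

```python
import hashlib

def bytes_xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # sha256(sha256(tag) || sha256(tag) || msg), as in BIP 340
    th = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(th + th + msg).digest()

secshare = bytes(range(32))   # stand-in secret share, illustrative only
rand_ = b'\x07' * 32          # stand-in for 32 fresh random bytes
mask = tagged_hash('FROST/aux', rand_)
blinded = bytes_xor(secshare, mask)
# XOR with the same mask is an involution: the share is recovered exactly.
assert bytes_xor(blinded, mask) == secshare
assert blinded != secshare
```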

# think: can msg & extra_in be of any length here?
# think: why doesn't the musig2 reference code check the `pk` length here?
def nonce_gen(secshare: Optional[bytes], pubshare: Optional[PlainPk], group_pk: Optional[XonlyPk], msg: Optional[bytes], extra_in: Optional[bytes]) -> Tuple[bytearray, bytes]:
    if secshare is not None and len(secshare) != 32:
        raise ValueError('The optional byte array secshare must have length 32.')
    if pubshare is not None and len(pubshare) != 33:
        raise ValueError('The optional byte array pubshare must have length 33.')
    if group_pk is not None and len(group_pk) != 32:
        raise ValueError('The optional byte array group_pk must have length 32.')
    # bench: would adding an individual_pk(secshare) == pubshare check increase the execution time significantly?
    rand_ = secrets.token_bytes(32)
    return nonce_gen_internal(rand_, secshare, pubshare, group_pk, msg, extra_in)

def nonce_agg(pubnonces: List[bytes], ids: List[bytes]) -> bytes:
    if len(pubnonces) != len(ids):
        raise ValueError('The pubnonces and ids arrays must have the same length.')
    aggnonce = b''
    for j in (1, 2):
        R_j = infinity
        for my_id_, pubnonce in zip(ids, pubnonces):
            try:
                R_ij = cpoint(pubnonce[(j-1)*33:j*33])
            except ValueError:
                my_id = int_from_bytes(my_id_) if my_id_ != AGGREGATOR_ID else my_id_
                raise InvalidContributionError(my_id, "pubnonce")
            R_j = point_add(R_j, R_ij)
        aggnonce += cbytes_ext(R_j)
    return aggnonce

SessionContext = NamedTuple('SessionContext', [('aggnonce', bytes),
                                               ('identifiers', List[bytes]),
                                               ('pubshares', List[PlainPk]),
                                               ('tweaks', List[bytes]),
                                               ('is_xonly', List[bool]),
                                               ('msg', bytes)])

def group_pubkey_and_tweak(pubshares: List[PlainPk], ids: List[bytes], tweaks: List[bytes], is_xonly: List[bool]) -> TweakContext:
    if len(pubshares) != len(ids):
        raise ValueError('The pubshares and ids arrays must have the same length.')
    if len(tweaks) != len(is_xonly):
        raise ValueError('The tweaks and is_xonly arrays must have the same length.')
    tweak_ctx = tweak_ctx_init(pubshares, ids)
    v = len(tweaks)
    for i in range(v):
        tweak_ctx = apply_tweak(tweak_ctx, tweaks[i], is_xonly[i])
    return tweak_ctx

def get_session_values(session_ctx: SessionContext) -> Tuple[Point, int, int, int, Point, int]:
    (aggnonce, ids, pubshares, tweaks, is_xonly, msg) = session_ctx
    Q, gacc, tacc = group_pubkey_and_tweak(pubshares, ids, tweaks, is_xonly)
    # sort the ids before serializing, because the ROAST paper treats them as a set
    concat_ids = b''.join(sorted(ids))
    b = int_from_bytes(tagged_hash('FROST/noncecoef', concat_ids + aggnonce + xbytes(Q) + msg)) % n
    try:
        R_1 = cpoint_ext(aggnonce[0:33])
        R_2 = cpoint_ext(aggnonce[33:66])
    except ValueError:
        # The nonce aggregator sent invalid nonces.
        raise InvalidContributionError(None, "aggnonce")
    R_ = point_add(R_1, point_mul(R_2, b))
    R = R_ if not is_infinite(R_) else G
    assert R is not None
    e = int_from_bytes(tagged_hash('BIP0340/challenge', xbytes(R) + xbytes(Q) + msg)) % n
    return (Q, gacc, tacc, b, R, e)

def get_session_interpolating_value(session_ctx: SessionContext, my_id: bytes) -> int:
    (_, ids, _, _, _, _) = session_ctx
    return derive_interpolating_value(ids, my_id)

def session_has_signer_pubshare(session_ctx: SessionContext, pubshare: bytes) -> bool:
    (_, _, pubshares_list, _, _, _) = session_ctx
    return pubshare in pubshares_list

def sign(secnonce: bytearray, secshare: bytes, my_id: bytes, session_ctx: SessionContext) -> bytes:
    # do we really need the below check?
    # add a test vector for this check if confirmed
    if not 0 < int_from_bytes(my_id) < n:
        raise ValueError('The signer\'s participant identifier is out of range.')
    (Q, gacc, _, b, R, e) = get_session_values(session_ctx)
    k_1_ = int_from_bytes(secnonce[0:32])
    k_2_ = int_from_bytes(secnonce[32:64])
    # Overwrite the secnonce argument with zeros such that subsequent calls of
    # sign with the same secnonce raise a ValueError.
    secnonce[:] = bytearray(b'\x00'*64)
    if not 0 < k_1_ < n:
        raise ValueError('first secnonce value is out of range.')
    if not 0 < k_2_ < n:
        raise ValueError('second secnonce value is out of range.')
    k_1 = k_1_ if has_even_y(R) else n - k_1_
    k_2 = k_2_ if has_even_y(R) else n - k_2_
    d_ = int_from_bytes(secshare)
    if not 0 < d_ < n:
        raise ValueError('The signer\'s secret share value is out of range.')
    P = point_mul(G, d_)
    assert P is not None
    pubshare = cbytes(P)
    if not session_has_signer_pubshare(session_ctx, pubshare):
        raise ValueError('The signer\'s pubshare must be included in the list of pubshares.')
    a = get_session_interpolating_value(session_ctx, my_id)
    g = 1 if has_even_y(Q) else n - 1
    d = g * gacc * d_ % n
    s = (k_1 + b * k_2 + e * a * d) % n
    psig = bytes_from_int(s)
    R_s1 = point_mul(G, k_1_)
    R_s2 = point_mul(G, k_2_)
    assert R_s1 is not None
    assert R_s2 is not None
    pubnonce = cbytes(R_s1) + cbytes(R_s2)
    # Optional correctness check. The result of signing should pass signature verification.
    assert partial_sig_verify_internal(psig, my_id, pubnonce, pubshare, session_ctx)
    return psig

# todo: should we hash the signer set (or pubshares) too? Otherwise the same nonce will be generated even if the signer set changes.
def det_nonce_hash(secshare_: bytes, aggothernonce: bytes, tweaked_gpk: bytes, msg: bytes, i: int) -> int:
    buf = b''
    buf += secshare_
    buf += aggothernonce
    buf += tweaked_gpk
    buf += len(msg).to_bytes(8, 'big')
    buf += msg
    buf += i.to_bytes(1, 'big')
    return int_from_bytes(tagged_hash('FROST/deterministic/nonce', buf))

def deterministic_sign(secshare: bytes, my_id: bytes, aggothernonce: bytes, ids: List[bytes], pubshares: List[PlainPk], tweaks: List[bytes], is_xonly: List[bool], msg: bytes, rand: Optional[bytes]) -> Tuple[bytes, bytes]:
    if rand is not None:
        secshare_ = bytes_xor(secshare, tagged_hash('FROST/aux', rand))
    else:
        secshare_ = secshare

    tweaked_gpk = get_xonly_pk(group_pubkey_and_tweak(pubshares, ids, tweaks, is_xonly))

    k_1 = det_nonce_hash(secshare_, aggothernonce, tweaked_gpk, msg, 0) % n
    k_2 = det_nonce_hash(secshare_, aggothernonce, tweaked_gpk, msg, 1) % n
    # k_1 == 0 or k_2 == 0 cannot occur except with negligible probability.
    assert k_1 != 0
    assert k_2 != 0

    R_s1 = point_mul(G, k_1)
    R_s2 = point_mul(G, k_2)
    assert R_s1 is not None
    assert R_s2 is not None
    pubnonce = cbytes(R_s1) + cbytes(R_s2)
    secnonce = bytearray(bytes_from_int(k_1) + bytes_from_int(k_2))
    try:
        aggnonce = nonce_agg([pubnonce, aggothernonce], [my_id, AGGREGATOR_ID])
    except Exception:
        raise InvalidContributionError(None, "aggothernonce")
    session_ctx = SessionContext(aggnonce, ids, pubshares, tweaks, is_xonly, msg)
    psig = sign(secnonce, secshare, my_id, session_ctx)
    return (pubnonce, psig)

def partial_sig_verify(psig: bytes, ids: List[bytes], pubnonces: List[bytes], pubshares: List[PlainPk], tweaks: List[bytes], is_xonly: List[bool], msg: bytes, i: int) -> bool:
    if not len(ids) == len(pubnonces) == len(pubshares):
        raise ValueError('The ids, pubnonces and pubshares arrays must have the same length.')
    if len(tweaks) != len(is_xonly):
        raise ValueError('The tweaks and is_xonly arrays must have the same length.')
    aggnonce = nonce_agg(pubnonces, ids)
    session_ctx = SessionContext(aggnonce, ids, pubshares, tweaks, is_xonly, msg)
    return partial_sig_verify_internal(psig, ids[i], pubnonces[i], pubshares[i], session_ctx)

# todo: catch the `cpoint` ValueError and return False
def partial_sig_verify_internal(psig: bytes, my_id: bytes, pubnonce: bytes, pubshare: bytes, session_ctx: SessionContext) -> bool:
    (Q, gacc, _, b, R, e) = get_session_values(session_ctx)
    s = int_from_bytes(psig)
    if s >= n:
        return False
    if not session_has_signer_pubshare(session_ctx, pubshare):
        return False
    R_s1 = cpoint(pubnonce[0:33])
    R_s2 = cpoint(pubnonce[33:66])
    Re_s_ = point_add(R_s1, point_mul(R_s2, b))
    Re_s = Re_s_ if has_even_y(R) else point_negate(Re_s_)
    P = cpoint(pubshare)
    if P is None:
        return False
    a = get_session_interpolating_value(session_ctx, my_id)
    g = 1 if has_even_y(Q) else n - 1
    g_ = g * gacc % n
    return point_mul(G, s) == point_add(Re_s, point_mul(P, e * a * g_ % n))

def partial_sig_agg(psigs: List[bytes], ids: List[bytes], session_ctx: SessionContext) -> bytes:
    assert AGGREGATOR_ID not in ids
    if len(psigs) != len(ids):
        raise ValueError('The psigs and ids arrays must have the same length.')
    (Q, _, tacc, _, R, e) = get_session_values(session_ctx)
    s = 0
    for my_id, psig in zip(ids, psigs):
        s_i = int_from_bytes(psig)
        if s_i >= n:
            raise InvalidContributionError(int_from_bytes(my_id), "psig")
        s = (s + s_i) % n
    g = 1 if has_even_y(Q) else n - 1
    s = (s + e * g * tacc) % n
    return xbytes(R) + bytes_from_int(s)
@ -0,0 +1,93 @@

#
# The following helper functions were copied from the BIP-340 reference implementation:
# https://github.com/bitcoin/bips/blob/master/bip-0340/reference.py
#

from typing import Tuple, Optional
import hashlib

p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

# Points are tuples of X and Y coordinates, and the point at infinity is
# represented by the None keyword.
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798, 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

Point = Tuple[int, int]

# This implementation can be sped up by storing the midstate after hashing
# tag_hash instead of rehashing it all the time.
def tagged_hash(tag: str, msg: bytes) -> bytes:
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + msg).digest()
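Tagged hashes prepend sha256(tag) twice to the message, so the same input hashed under different tags (e.g. 'FROST/nonce' vs 'FROST/noncecoef') yields unrelated digests. A quick self-contained check of that domain separation:

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # sha256(sha256(tag) || sha256(tag) || msg), as specified in BIP 340
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + msg).digest()

msg = b'same message'
d1 = tagged_hash('FROST/nonce', msg)
d2 = tagged_hash('FROST/noncecoef', msg)
assert len(d1) == len(d2) == 32
assert d1 != d2  # different tags separate the hash domains
# a tagged hash also differs from the plain sha256 of the message
assert d1 != hashlib.sha256(msg).digest()
```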

def is_infinite(P: Optional[Point]) -> bool:
    return P is None

def x(P: Point) -> int:
    assert not is_infinite(P)
    return P[0]

def y(P: Point) -> int:
    assert not is_infinite(P)
    return P[1]

def point_add(P1: Optional[Point], P2: Optional[Point]) -> Optional[Point]:
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    if (x(P1) == x(P2)) and (y(P1) != y(P2)):
        return None
    if P1 == P2:
        lam = (3 * x(P1) * x(P1) * pow(2 * y(P1), p - 2, p)) % p
    else:
        lam = ((y(P2) - y(P1)) * pow(x(P2) - x(P1), p - 2, p)) % p
    x3 = (lam * lam - x(P1) - x(P2)) % p
    return (x3, (lam * (x(P1) - x3) - y(P1)) % p)

def point_mul(P: Optional[Point], n: int) -> Optional[Point]:
    R = None
    for i in range(256):
        if (n >> i) & 1:
            R = point_add(R, P)
        P = point_add(P, P)
    return R
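These helpers implement affine point addition and binary double-and-add scalar multiplication. A self-contained sanity sketch (re-declaring the constants and functions locally so it runs on its own) checks that scalar multiplication agrees with repeated addition and that the generator has order n:

```python
# Local copies of the secp256k1 constants and affine group law, for the sketch only.
p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(P1, P2):
    if P1 is None: return P2
    if P2 is None: return P1
    if P1[0] == P2[0] and P1[1] != P2[1]: return None
    if P1 == P2:
        lam = (3 * P1[0] * P1[0] * pow(2 * P1[1], p - 2, p)) % p
    else:
        lam = ((P2[1] - P1[1]) * pow(P2[0] - P1[0], p - 2, p)) % p
    x3 = (lam * lam - P1[0] - P2[0]) % p
    return (x3, (lam * (P1[0] - x3) - P1[1]) % p)

def point_mul(P, k):
    # binary double-and-add, scanning the scalar bits from low to high
    R = None
    for i in range(256):
        if (k >> i) & 1:
            R = point_add(R, P)
        P = point_add(P, P)
    return R

assert point_mul(G, 3) == point_add(G, point_add(G, G))  # 3G by repeated addition
assert point_mul(G, n) is None                           # nG is the point at infinity
```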

def bytes_from_int(x: int) -> bytes:
    return x.to_bytes(32, byteorder="big")

def lift_x(b: bytes) -> Optional[Point]:
    x = int_from_bytes(b)
    if x >= p:
        return None
    y_sq = (pow(x, 3, p) + 7) % p
    y = pow(y_sq, (p + 1) // 4, p)
    if pow(y, 2, p) != y_sq:
        return None
    return (x, y if y & 1 == 0 else p-y)

def int_from_bytes(b: bytes) -> int:
    return int.from_bytes(b, byteorder="big")

def has_even_y(P: Point) -> bool:
    assert not is_infinite(P)
    return y(P) % 2 == 0

def schnorr_verify(msg: bytes, pubkey: bytes, sig: bytes) -> bool:
    if len(msg) != 32:
        raise ValueError('The message must be a 32-byte array.')
    if len(pubkey) != 32:
        raise ValueError('The public key must be a 32-byte array.')
    if len(sig) != 64:
        raise ValueError('The signature must be a 64-byte array.')
    P = lift_x(pubkey)
    r = int_from_bytes(sig[0:32])
    s = int_from_bytes(sig[32:64])
    if (P is None) or (r >= p) or (s >= n):
        return False
    e = int_from_bytes(tagged_hash("BIP0340/challenge", sig[0:32] + pubkey + msg)) % n
    R = point_add(point_mul(G, s), point_mul(P, n - e))
    if (R is None) or (not has_even_y(R)) or (x(R) != r):
        return False
    return True
@ -0,0 +1,22 @@

The MIT License (MIT)

Copyright (c) 2009-2024 The Bitcoin Core developers
Copyright (c) 2009-2024 Bitcoin Developers

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
@ -0,0 +1,73 @@

# The following functions are based on the BIP 340 reference implementation:
# https://github.com/bitcoin/bips/blob/master/bip-0340/reference.py

from .secp256k1 import FE, GE, G
from .util import int_from_bytes, bytes_from_int, xor_bytes, tagged_hash


def pubkey_gen(seckey: bytes) -> bytes:
    d0 = int_from_bytes(seckey)
    if not (1 <= d0 <= GE.ORDER - 1):
        raise ValueError("The secret key must be an integer in the range 1..n-1.")
    P = d0 * G
    assert not P.infinity
    return P.to_bytes_xonly()


def schnorr_sign(
    msg: bytes, seckey: bytes, aux_rand: bytes, tag_prefix: str = "BIP0340"
) -> bytes:
    d0 = int_from_bytes(seckey)
    if not (1 <= d0 <= GE.ORDER - 1):
        raise ValueError("The secret key must be an integer in the range 1..n-1.")
    if len(aux_rand) != 32:
        raise ValueError("aux_rand must be 32 bytes instead of %i." % len(aux_rand))
    P = d0 * G
    assert not P.infinity
    d = d0 if P.has_even_y() else GE.ORDER - d0
    t = xor_bytes(bytes_from_int(d), tagged_hash(tag_prefix + "/aux", aux_rand))
    k0 = (
        int_from_bytes(tagged_hash(tag_prefix + "/nonce", t + P.to_bytes_xonly() + msg))
        % GE.ORDER
    )
    if k0 == 0:
        raise RuntimeError("Failure. This happens only with negligible probability.")
    R = k0 * G
    assert not R.infinity
    k = k0 if R.has_even_y() else GE.ORDER - k0
    e = (
        int_from_bytes(
            tagged_hash(
                tag_prefix + "/challenge", R.to_bytes_xonly() + P.to_bytes_xonly() + msg
            )
        )
        % GE.ORDER
    )
    sig = R.to_bytes_xonly() + bytes_from_int((k + e * d) % GE.ORDER)
    assert schnorr_verify(msg, P.to_bytes_xonly(), sig, tag_prefix=tag_prefix)
    return sig


def schnorr_verify(
    msg: bytes, pubkey: bytes, sig: bytes, tag_prefix: str = "BIP0340"
) -> bool:
    if len(pubkey) != 32:
        raise ValueError("The public key must be a 32-byte array.")
    if len(sig) != 64:
        raise ValueError("The signature must be a 64-byte array.")
    try:
        P = GE.lift_x(int_from_bytes(pubkey))
    except ValueError:
        return False
    r = int_from_bytes(sig[0:32])
    s = int_from_bytes(sig[32:64])
    if (r >= FE.SIZE) or (s >= GE.ORDER):
        return False
    e = (
        int_from_bytes(tagged_hash(tag_prefix + "/challenge", sig[0:32] + pubkey + msg))
        % GE.ORDER
    )
    R = s * G - e * P
    if R.infinity or (not R.has_even_y()) or (R.x != r):
        return False
    return True
@ -0,0 +1,16 @@

import hashlib

from .secp256k1 import GE, Scalar


def ecdh_compressed_in_raw_out(seckey: bytes, pubkey: bytes) -> GE:
    """TODO"""
    shared_secret = Scalar.from_bytes(seckey) * GE.from_bytes_compressed(pubkey)
    assert not shared_secret.infinity  # prime-order group
    return shared_secret


def ecdh_libsecp256k1(seckey: bytes, pubkey: bytes) -> bytes:
    """TODO"""
    shared_secret = ecdh_compressed_in_raw_out(seckey, pubkey)
    return hashlib.sha256(shared_secret.to_bytes_compressed()).digest()
@ -0,0 +1,15 @@

from .secp256k1 import GE, G
from .util import int_from_bytes

# The following function is based on the BIP 327 reference implementation:
# https://github.com/bitcoin/bips/blob/master/bip-0327/reference.py


# Return the plain public key corresponding to a given secret key.
def pubkey_gen_plain(seckey: bytes) -> bytes:
    d0 = int_from_bytes(seckey)
    if not (1 <= d0 <= GE.ORDER - 1):
        raise ValueError("The secret key must be an integer in the range 1..n-1.")
    P = d0 * G
    assert not P.infinity
    return P.to_bytes_compressed()
@ -0,0 +1,438 @@

# Copyright (c) 2022-2023 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.

"""Test-only implementation of low-level secp256k1 field and group arithmetic

It is designed for ease of understanding, not performance.

WARNING: This code is slow and trivially vulnerable to side channel attacks. Do not use for
anything but tests.

Exports:
* FE: class for secp256k1 field elements
* GE: class for secp256k1 group elements
* G: the secp256k1 generator point
"""

# TODO Docstrings of methods still say "field element"
class APrimeFE:
    """Objects of this class represent elements of a prime field.

    They are represented internally in numerator / denominator form, in order to delay inversions.
    """

    # The size of the field (also its modulus and characteristic).
    SIZE: int

    def __init__(self, a=0, b=1):
        """Initialize a field element a/b; both a and b can be ints or field elements."""
        if isinstance(a, type(self)):
            num = a._num
            den = a._den
        else:
            num = a % self.SIZE
            den = 1
        if isinstance(b, type(self)):
            den = (den * b._num) % self.SIZE
            num = (num * b._den) % self.SIZE
        else:
            den = (den * b) % self.SIZE
        assert den != 0
        if num == 0:
            den = 1
        self._num = num
        self._den = den

    def __add__(self, a):
        """Compute the sum of two field elements (second may be int)."""
        if isinstance(a, type(self)):
            return type(self)(self._num * a._den + self._den * a._num, self._den * a._den)
        if isinstance(a, int):
            return type(self)(self._num + self._den * a, self._den)
        return NotImplemented

    def __radd__(self, a):
        """Compute the sum of an integer and a field element."""
        return type(self)(a) + self

    @classmethod
    # REVIEW This should be
    #     def sum(cls, *es: Iterable[Self]) -> Self:
    # but Self needs the typing_extensions package on Python < 3.11.
    def sum(cls, *es):
        """Compute the sum of field elements.

        sum(a, b, c, ...) is identical to (0 + a + b + c + ...)."""
        return sum(es, start=cls(0))

    def __sub__(self, a):
        """Compute the difference of two field elements (second may be int)."""
        if isinstance(a, type(self)):
            return type(self)(self._num * a._den - self._den * a._num, self._den * a._den)
        if isinstance(a, int):
            return type(self)(self._num - self._den * a, self._den)
        return NotImplemented

    def __rsub__(self, a):
        """Compute the difference of an integer and a field element."""
        return type(self)(a) - self

    def __mul__(self, a):
        """Compute the product of two field elements (second may be int)."""
        if isinstance(a, type(self)):
            return type(self)(self._num * a._num, self._den * a._den)
        if isinstance(a, int):
            return type(self)(self._num * a, self._den)
        return NotImplemented

    def __rmul__(self, a):
        """Compute the product of an integer with a field element."""
        return type(self)(a) * self

    def __truediv__(self, a):
        """Compute the ratio of two field elements (second may be int)."""
        if isinstance(a, type(self)) or isinstance(a, int):
            return type(self)(self, a)
        return NotImplemented

    def __pow__(self, a):
        """Raise a field element to an integer power."""
        return type(self)(pow(self._num, a, self.SIZE), pow(self._den, a, self.SIZE))

    def __neg__(self):
        """Negate a field element."""
        return type(self)(-self._num, self._den)

    def __int__(self):
        """Convert a field element to an integer in range 0..SIZE-1. The result is cached."""
        if self._den != 1:
            self._num = (self._num * pow(self._den, -1, self.SIZE)) % self.SIZE
            self._den = 1
        return self._num

    def sqrt(self):
        """Compute the square root of a field element if it exists (None otherwise)."""
        raise NotImplementedError

    def is_square(self):
        """Determine if this field element has a square root."""
        # A more efficient algorithm is possible here (Jacobi symbol).
        return self.sqrt() is not None

    def is_even(self):
        """Determine whether this field element, represented as an integer in 0..SIZE-1, is even."""
        return int(self) & 1 == 0

    def __eq__(self, a):
        """Check whether two field elements are equal (second may be an int)."""
        if isinstance(a, type(self)):
            return (self._num * a._den - self._den * a._num) % self.SIZE == 0
        return (self._num - self._den * a) % self.SIZE == 0

    def to_bytes(self):
        """Convert a field element to a 32-byte array (BE byte order)."""
        return int(self).to_bytes(32, 'big')

    @classmethod
    def from_bytes(cls, b):
        """Convert a 32-byte array to a field element (BE byte order, no overflow allowed)."""
        v = int.from_bytes(b, 'big')
        if v >= cls.SIZE:
            raise ValueError
        return cls(v)

    def __str__(self):
        """Convert this field element to a 64-character hex string."""
        return f"{int(self):064x}"

    def __repr__(self):
        """Get a string representation of this field element."""
        return f"{type(self).__qualname__}(0x{int(self):x})"
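The numerator/denominator representation above postpones the expensive modular inversion until `__int__` is called; the inversion itself is just three-argument `pow` with exponent -1. A tiny self-contained illustration of the delayed-inversion idea (small prime chosen for readability; none of these names belong to this module):

```python
p = 101  # small illustrative prime

# Track a/b as a pair and resolve it to an integer only at the end,
# mirroring APrimeFE's delayed-inversion representation.
num, den = 3, 7                    # represents the field element 3/7 mod 101
num, den = num * 5, den * 2        # multiply by 5/2 with no inversion yet
value = num * pow(den, -1, p) % p  # a single inversion at the very end

# Resolving eagerly at every step (one inversion per division) agrees.
eager = 3 * pow(7, -1, p) % p
eager = eager * 5 * pow(2, -1, p) % p
assert value == eager
```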

class FE(APrimeFE):
    SIZE = 2**256 - 2**32 - 977

    def sqrt(self):
        # Due to the fact that our modulus p is of the form (p % 4) == 3, the Tonelli-Shanks
        # algorithm (https://en.wikipedia.org/wiki/Tonelli-Shanks_algorithm) is simply
        # raising the argument to the power (p + 1) / 4.

        # To see why: (p-1) % 2 = 0, so 2 divides the order of the multiplicative group,
        # and thus only half of the non-zero field elements are squares. An element a is
        # a (nonzero) square when Euler's criterion, a^((p-1)/2) = 1 (mod p), holds. We're
        # looking for x such that x^2 = a (mod p). Given a^((p-1)/2) = 1, that is equivalent
        # to x^2 = a^(1 + (p-1)/2) mod p. As (1 + (p-1)/2) is even, this is equivalent to
        # x = a^((1 + (p-1)/2)/2) mod p, or x = a^((p+1)/4) mod p.
        v = int(self)
        s = pow(v, (self.SIZE + 1) // 4, self.SIZE)
        if s**2 % self.SIZE == v:
            return type(self)(s)
        return None
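Since the secp256k1 modulus satisfies p ≡ 3 (mod 4), the candidate root a^((p+1)/4) mod p either is a genuine square root or the element has none, and a single squaring decides which. A self-contained sketch of this shortcut, independent of the classes above:

```python
p = 2**256 - 2**32 - 977  # the secp256k1 field modulus
assert p % 4 == 3

def sqrt_mod_p(v: int):
    # Tonelli-Shanks shortcut for p % 4 == 3: try a^((p+1)/4),
    # then verify by squaring (half the nonzero elements have no root).
    s = pow(v, (p + 1) // 4, p)
    return s if s * s % p == v else None

a = 12345
sq = a * a % p
r = sqrt_mod_p(sq)
assert r is not None and r * r % p == sq  # recovered a square root of a^2
assert sqrt_mod_p(p - 1) is None          # -1 is a non-square when p % 4 == 3
```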


class Scalar(APrimeFE):
    """TODO Docstring"""
    SIZE = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141


class GE:
    """Objects of this class represent secp256k1 group elements (curve points or infinity)

    GE objects are immutable.

    Normal points on the curve have fields:
    * x: the x coordinate (a field element)
    * y: the y coordinate (a field element, satisfying y^2 = x^3 + 7)
    * infinity: False

    The point at infinity has field:
    * infinity: True
    """

    # TODO The following two class attributes should probably be just getters as
    # classmethods to enforce immutability. Unfortunately Python makes it hard
    # to create "classproperties". `G` could then also be just a classmethod.

    # Order of the group (number of points on the curve, plus 1 for infinity)
    ORDER = Scalar.SIZE

    # Number of valid distinct x coordinates on the curve.
    ORDER_HALF = ORDER // 2

    @property
    def infinity(self):
        """Whether the group element is the point at infinity."""
        return self._infinity

    @property
    def x(self):
        """The x coordinate (a field element) of a non-infinite group element."""
        assert not self.infinity
        return self._x

    @property
    def y(self):
        """The y coordinate (a field element) of a non-infinite group element."""
        assert not self.infinity
        return self._y

    def __init__(self, x=None, y=None):
        """Initialize a group element with specified x and y coordinates, or as infinity."""
        if x is None:
            # Initialize as infinity.
            assert y is None
            self._infinity = True
        else:
            # Initialize as a point on the curve (and check that it is).
            fx = FE(x)
            fy = FE(y)
            assert fy**2 == fx**3 + 7
            self._infinity = False
            self._x = fx
            self._y = fy
||||
|
||||
def __add__(self, a): |
||||
"""Add two group elements together.""" |
||||
# Deal with infinity: a + infinity == infinity + a == a. |
||||
if self.infinity: |
||||
return a |
||||
if a.infinity: |
||||
return self |
||||
if self.x == a.x: |
||||
if self.y != a.y: |
||||
# A point added to its own negation is infinity. |
||||
assert self.y + a.y == 0 |
||||
return GE() |
||||
else: |
||||
# For identical inputs, use the tangent (doubling formula). |
||||
lam = (3 * self.x**2) / (2 * self.y) |
||||
else: |
||||
# For distinct inputs, use the line through both points (adding formula). |
||||
lam = (self.y - a.y) / (self.x - a.x) |
||||
# Determine point opposite to the intersection of that line with the curve. |
||||
x = lam**2 - (self.x + a.x) |
||||
y = lam * (self.x - x) - self.y |
||||
return GE(x, y) |
||||
|
||||
@staticmethod |
||||
def sum(*ps): |
||||
"""Compute the sum of group elements. |
||||
|
||||
GE.sum(a, b, c, ...) is identical to (GE() + a + b + c + ...).""" |
||||
return sum(ps, start=GE()) |
||||
|
||||
@staticmethod |
||||
def batch_mul(*aps): |
||||
"""Compute a (batch) scalar group element multiplication. |
||||
|
||||
GE.batch_mul((a1, p1), (a2, p2), (a3, p3)) is identical to a1*p1 + a2*p2 + a3*p3, |
||||
but more efficient.""" |
||||
# Reduce all the scalars modulo order first (so we can deal with negatives etc). |
||||
naps = [(int(a), p) for a, p in aps] |
||||
# Start with point at infinity. |
||||
r = GE() |
||||
# Iterate over all bit positions, from high to low. |
||||
for i in range(255, -1, -1): |
||||
# Double what we have so far. |
||||
r = r + r |
||||
            # Then add the points for which the corresponding scalar bit is set.
||||
for (a, p) in naps: |
||||
if (a >> i) & 1: |
||||
r += p |
||||
return r |
||||
|
||||
def __rmul__(self, a): |
||||
"""Multiply an integer with a group element.""" |
||||
if self == G: |
||||
return FAST_G.mul(Scalar(a)) |
||||
return GE.batch_mul((Scalar(a), self)) |
||||
|
||||
def __neg__(self): |
||||
"""Compute the negation of a group element.""" |
||||
if self.infinity: |
||||
return self |
||||
return GE(self.x, -self.y) |
||||
|
||||
def __sub__(self, a): |
||||
"""Subtract a group element from another.""" |
||||
return self + (-a) |
||||
|
||||
def __eq__(self, a): |
||||
"""Check if two group elements are equal.""" |
||||
return (self - a).infinity |
||||
|
||||
def has_even_y(self): |
||||
"""Determine whether a non-infinity group element has an even y coordinate.""" |
||||
assert not self.infinity |
||||
return self.y.is_even() |
||||
|
||||
def to_bytes_compressed(self): |
||||
"""Convert a non-infinite group element to 33-byte compressed encoding.""" |
||||
assert not self.infinity |
||||
return bytes([3 - self.y.is_even()]) + self.x.to_bytes() |
||||
|
||||
def to_bytes_compressed_with_infinity(self): |
||||
"""Convert a group element to 33-byte compressed encoding, mapping infinity to zeros.""" |
||||
if self.infinity: |
||||
return 33 * b"\x00" |
||||
return self.to_bytes_compressed() |
||||
|
||||
def to_bytes_uncompressed(self): |
||||
"""Convert a non-infinite group element to 65-byte uncompressed encoding.""" |
||||
assert not self.infinity |
||||
return b'\x04' + self.x.to_bytes() + self.y.to_bytes() |
||||
|
||||
def to_bytes_xonly(self): |
||||
"""Convert (the x coordinate of) a non-infinite group element to 32-byte xonly encoding.""" |
||||
assert not self.infinity |
||||
return self.x.to_bytes() |
||||
|
||||
@staticmethod |
||||
def lift_x(x): |
||||
"""Return group element with specified field element as x coordinate (and even y).""" |
||||
y = (FE(x)**3 + 7).sqrt() |
||||
if y is None: |
||||
raise ValueError |
||||
if not y.is_even(): |
||||
y = -y |
||||
return GE(x, y) |
||||
|
||||
@staticmethod |
||||
def from_bytes_compressed(b): |
||||
"""Convert a compressed to a group element.""" |
||||
assert len(b) == 33 |
||||
if b[0] != 2 and b[0] != 3: |
||||
raise ValueError |
||||
x = FE.from_bytes(b[1:]) |
||||
r = GE.lift_x(x) |
||||
if b[0] == 3: |
||||
r = -r |
||||
return r |
||||
|
||||
@staticmethod |
||||
def from_bytes_uncompressed(b): |
||||
"""Convert an uncompressed to a group element.""" |
||||
assert len(b) == 65 |
||||
if b[0] != 4: |
||||
raise ValueError |
||||
x = FE.from_bytes(b[1:33]) |
||||
y = FE.from_bytes(b[33:]) |
||||
if y**2 != x**3 + 7: |
||||
raise ValueError |
||||
return GE(x, y) |
||||
|
||||
@staticmethod |
||||
def from_bytes(b): |
||||
"""Convert a compressed or uncompressed encoding to a group element.""" |
||||
assert len(b) in (33, 65) |
||||
if len(b) == 33: |
||||
return GE.from_bytes_compressed(b) |
||||
else: |
||||
return GE.from_bytes_uncompressed(b) |
||||
|
||||
@staticmethod |
||||
def from_bytes_xonly(b): |
||||
"""Convert a point given in xonly encoding to a group element.""" |
||||
assert len(b) == 32 |
||||
x = FE.from_bytes(b) |
||||
r = GE.lift_x(x) |
||||
return r |
||||
|
||||
@staticmethod |
||||
def is_valid_x(x): |
||||
"""Determine whether the provided field element is a valid X coordinate.""" |
||||
return (FE(x)**3 + 7).is_square() |
||||
|
||||
def __str__(self): |
||||
"""Convert this group element to a string.""" |
||||
if self.infinity: |
||||
return "(inf)" |
||||
return f"({self.x},{self.y})" |
||||
|
||||
def __repr__(self): |
||||
"""Get a string representation for this group element.""" |
||||
if self.infinity: |
||||
return "GE()" |
||||
return f"GE(0x{int(self.x):x},0x{int(self.y):x})" |
||||
|
||||
def __hash__(self): |
||||
"""Compute a non-cryptographic hash of the group element.""" |
||||
if self.infinity: |
||||
return 0 # 0 is not a valid x coordinate |
||||
return int(self.x) |
||||
|
||||
|
||||
# The secp256k1 generator point |
||||
G = GE.lift_x(0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798) |
||||
|
||||
|
||||
class FastGEMul: |
||||
"""Table for fast multiplication with a constant group element. |
||||
|
||||
Speed up scalar multiplication with a fixed point P by using a precomputed lookup table with |
||||
its powers of 2: |
||||
|
||||
table = [P, 2*P, 4*P, (2^3)*P, (2^4)*P, ..., (2^255)*P] |
||||
|
||||
During multiplication, the points corresponding to each bit set in the scalar are added up, |
||||
i.e. on average ~128 point additions take place. |
||||
""" |
||||
|
||||
def __init__(self, p): |
||||
self.table = [p] # table[i] = (2^i) * p |
||||
for _ in range(255): |
||||
p = p + p |
||||
self.table.append(p) |
||||
|
||||
def mul(self, a): |
||||
result = GE() |
||||
a = int(a) |
||||
for bit in range(a.bit_length()): |
||||
if a & (1 << bit): |
||||
result += self.table[bit] |
||||
return result |
||||
|
||||
# Precomputed table with multiples of G for fast multiplication |
||||
FAST_G = FastGEMul(G) |
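The double-and-add loop in `GE.batch_mul` above can be sketched as a standalone snippet using affine coordinates and plain modular integers; this is illustrative only, and the constants and helper names here are not part of the file:

```python
# Standalone sketch of the double-and-add scalar multiplication used by
# GE.batch_mul, over the secp256k1 field. None represents the point at
# infinity; lam is the tangent/chord slope, matching GE.__add__.
P = 2**256 - 2**32 - 977  # secp256k1 field size
GX = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
GY = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def point_add(a, b):
    """Affine point addition on y^2 = x^3 + 7 over GF(P)."""
    if a is None:
        return b
    if b is None:
        return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # a point plus its negation is infinity
    if a == b:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P  # tangent (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P  # chord (addition)
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def point_mul(k, p):
    """Double-and-add from the most significant bit, as in GE.batch_mul."""
    r = None
    for i in range(255, -1, -1):
        r = point_add(r, r)  # double what we have so far
        if (k >> i) & 1:
            r = point_add(r, p)  # add when the scalar bit is set
    return r
```

The three-argument `pow(x, -1, P)` form computes the modular inverse that `FE` division performs above.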
||||
@ -0,0 +1,24 @@
|
||||
import hashlib |
||||
|
||||
|
||||
# This implementation can be sped up by storing the midstate after hashing |
||||
# tag_hash instead of rehashing it all the time. |
||||
def tagged_hash(tag: str, msg: bytes) -> bytes: |
||||
tag_hash = hashlib.sha256(tag.encode()).digest() |
||||
return hashlib.sha256(tag_hash + tag_hash + msg).digest() |
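The midstate speedup mentioned in the comment above can be sketched by caching a `hashlib` object that has already absorbed `tag_hash || tag_hash` and calling `copy()` on it per message. The cache and function name below are illustrative, not part of this file:

```python
import hashlib

# Cache of sha256 objects whose state already includes tag_hash || tag_hash.
_TAG_MIDSTATES: dict = {}

def tagged_hash_cached(tag: str, msg: bytes) -> bytes:
    midstate = _TAG_MIDSTATES.get(tag)
    if midstate is None:
        tag_hash = hashlib.sha256(tag.encode()).digest()
        midstate = hashlib.sha256(tag_hash + tag_hash)
        _TAG_MIDSTATES[tag] = midstate
    h = midstate.copy()  # reuse the precomputed state for this tag
    h.update(msg)
    return h.digest()
```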
||||
|
||||
|
||||
def bytes_from_int(x: int) -> bytes: |
||||
return x.to_bytes(32, byteorder="big") |
||||
|
||||
|
||||
def xor_bytes(b0: bytes, b1: bytes) -> bytes: |
||||
return bytes(x ^ y for (x, y) in zip(b0, b1)) |
||||
|
||||
|
||||
def int_from_bytes(b: bytes) -> int: |
||||
return int.from_bytes(b, byteorder="big") |
||||
|
||||
|
||||
def hash_sha256(b: bytes) -> bytes: |
||||
return hashlib.sha256(b).digest() |
||||
@ -0,0 +1,301 @@
|
||||
#!/usr/bin/env python3 |
||||
|
||||
"""Example of a full ChillDKG session""" |
||||
|
||||
from typing import Tuple, List, Optional |
||||
import asyncio |
||||
import pprint |
||||
from random import randint |
||||
from secrets import token_bytes as random_bytes |
||||
import sys |
||||
import argparse |
||||
|
||||
from jmfrost.chilldkg_ref.chilldkg import ( |
||||
params_id, |
||||
hostpubkey_gen, |
||||
participant_step1, |
||||
participant_step2, |
||||
participant_finalize, |
||||
participant_investigate, |
||||
coordinator_step1, |
||||
coordinator_finalize, |
||||
coordinator_investigate, |
||||
SessionParams, |
||||
DKGOutput, |
||||
RecoveryData, |
||||
FaultyParticipantOrCoordinatorError, |
||||
UnknownFaultyParticipantOrCoordinatorError, |
||||
) |
||||
|
||||
# |
||||
# Network mocks to simulate full DKG sessions |
||||
# |
||||
|
||||
|
||||
class CoordinatorChannels: |
||||
def __init__(self, n): |
||||
self.n = n |
||||
self.queues = [] |
||||
for i in range(n): |
||||
self.queues += [asyncio.Queue()] |
||||
|
||||
def set_participant_queues(self, participant_queues): |
||||
self.participant_queues = participant_queues |
||||
|
||||
def send_to(self, i, m): |
||||
assert self.participant_queues is not None |
||||
self.participant_queues[i].put_nowait(m) |
||||
|
||||
def send_all(self, m): |
||||
assert self.participant_queues is not None |
||||
for i in range(self.n): |
||||
self.participant_queues[i].put_nowait(m) |
||||
|
||||
async def receive_from(self, i): |
||||
item = await self.queues[i].get() |
||||
return item |
||||
|
||||
|
||||
class ParticipantChannel: |
||||
def __init__(self, coord_queue): |
||||
self.queue = asyncio.Queue() |
||||
self.coord_queue = coord_queue |
||||
|
||||
# Send m to coordinator |
||||
def send(self, m): |
||||
self.coord_queue.put_nowait(m) |
||||
|
||||
async def receive(self): |
||||
item = await self.queue.get() |
||||
return item |
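The queue wiring above — each party owns an inbox `Queue` and writes into its peer's inbox — can be illustrated with a minimal self-contained round trip; the function and variable names here are hypothetical:

```python
import asyncio

def queue_roundtrip_demo() -> str:
    """One message to the coordinator and one reply back, via two inboxes."""
    async def run():
        coord_inbox: asyncio.Queue = asyncio.Queue()
        participant_inbox: asyncio.Queue = asyncio.Queue()

        async def participant():
            coord_inbox.put_nowait("hello")       # like ParticipantChannel.send
            return await participant_inbox.get()  # like ParticipantChannel.receive

        async def coordinator():
            msg = await coord_inbox.get()              # like receive_from
            participant_inbox.put_nowait(msg.upper())  # like send_to

        results = await asyncio.gather(participant(), coordinator())
        return results[0]

    return asyncio.run(run())
```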
||||
|
||||
|
||||
# |
||||
# Helper functions |
||||
# |
||||
|
||||
|
||||
def pphex(thing): |
||||
"""Pretty print an object with bytes as hex strings""" |
||||
|
||||
def hexlify(thing): |
||||
if isinstance(thing, bytes): |
||||
return thing.hex() |
||||
if isinstance(thing, dict): |
||||
return {k: hexlify(v) for k, v in thing.items()} |
||||
if hasattr(thing, "_asdict"): # NamedTuple |
||||
return hexlify(thing._asdict()) |
||||
if isinstance(thing, List): |
||||
return [hexlify(v) for v in thing] |
||||
return thing |
||||
|
||||
pprint.pp(hexlify(thing)) |
||||
|
||||
|
||||
# |
||||
# Protocol parties |
||||
# |
||||
|
||||
|
||||
async def participant( |
||||
chan: ParticipantChannel, |
||||
hostseckey: bytes, |
||||
params: SessionParams, |
||||
investigation_procedure: bool, |
||||
) -> Tuple[DKGOutput, RecoveryData]: |
||||
# TODO Top-level error handling |
||||
random = random_bytes(32) |
||||
state1, pmsg1 = participant_step1(hostseckey, params, random) |
||||
|
||||
chan.send(pmsg1) |
||||
cmsg1 = await chan.receive() |
||||
|
||||
    # Participants can implement an optional investigation procedure. This
    # allows the participant to determine which participant is faulty when an
    # `UnknownFaultyParticipantOrCoordinatorError` is raised. The investigation
    # procedure requires the participant to receive an extra "investigation
    # message" from the coordinator that contains the necessary information.
    #
    # In this example, if the investigation procedure is enabled, the
    # participant expects the coordinator to send an investigation message.
    # Alternatively, an implementation of the participant can explicitly request
    # the investigation message only if participant_step2 fails.
||||
if investigation_procedure: |
||||
cinv = await chan.receive() |
||||
|
||||
try: |
||||
state2, eq_round1 = participant_step2(hostseckey, state1, cmsg1) |
||||
except UnknownFaultyParticipantOrCoordinatorError as e: |
||||
if investigation_procedure: |
||||
participant_investigate(e, cinv) |
||||
else: |
||||
            # If this participant does not implement the investigation
            # procedure, it cannot determine which party is faulty. Re-raise
            # UnknownFaultyParticipantOrCoordinatorError in this case.
||||
raise |
||||
|
||||
chan.send(eq_round1) |
||||
cmsg2 = await chan.receive() |
||||
|
||||
return participant_finalize(state2, cmsg2) |
||||
|
||||
|
||||
async def coordinator( |
||||
chans: CoordinatorChannels, params: SessionParams, investigation_procedure: bool |
||||
) -> Tuple[DKGOutput, RecoveryData]: |
||||
(hostpubkeys, t) = params |
||||
n = len(hostpubkeys) |
||||
|
||||
pmsgs1 = [] |
||||
for i in range(n): |
||||
pmsgs1.append(await chans.receive_from(i)) |
||||
state, cmsg1 = coordinator_step1(pmsgs1, params) |
||||
chans.send_all(cmsg1) |
||||
|
||||
# If the coordinator implements the investigation procedure and it is |
||||
# enabled, it sends an extra message to the participants. |
||||
if investigation_procedure: |
||||
inv_msgs = coordinator_investigate(pmsgs1) |
||||
for i in range(n): |
||||
chans.send_to(i, inv_msgs[i]) |
||||
|
||||
sigs = [] |
||||
for i in range(n): |
||||
sigs += [await chans.receive_from(i)] |
||||
cmsg2, dkg_output, recovery_data = coordinator_finalize(state, sigs) |
||||
chans.send_all(cmsg2) |
||||
|
||||
return dkg_output, recovery_data |
||||
|
||||
|
||||
# |
||||
# DKG Session |
||||
# |
||||
|
||||
|
||||
# This is a dummy participant used to demonstrate the investigation procedure. |
||||
# It picks a random victim participant and sends an invalid share to it. |
||||
async def faulty_participant( |
||||
chan: ParticipantChannel, hostseckey: bytes, params: SessionParams, idx: int |
||||
): |
||||
random = random_bytes(32) |
||||
_, pmsg1 = participant_step1(hostseckey, params, random) |
||||
|
||||
n = len(pmsg1.enc_pmsg.enc_shares) |
||||
# Pick random victim that is not this participant |
||||
victim = (idx + randint(1, n - 1)) % n |
||||
pmsg1.enc_pmsg.enc_shares[victim] += 17 |
||||
|
||||
chan.send(pmsg1) |
||||
|
||||
|
||||
def simulate_chilldkg_full( |
||||
hostseckeys: List[bytes], params: SessionParams, faulty_idx: Optional[int] |
||||
) -> List[Optional[Tuple[DKGOutput, RecoveryData]]]: |
||||
n = len(hostseckeys) |
||||
assert n == len(params.hostpubkeys) |
||||
|
||||
    # For demonstration purposes, we enable the investigation procedure if a
    # participant is faulty.
||||
investigation_procedure = faulty_idx is not None |
||||
|
||||
async def session(): |
||||
coord_chans = CoordinatorChannels(n) |
||||
participant_chans = [ |
||||
ParticipantChannel(coord_chans.queues[i]) for i in range(n) |
||||
] |
||||
coord_chans.set_participant_queues( |
||||
[participant_chans[i].queue for i in range(n)] |
||||
) |
||||
coroutines = [coordinator(coord_chans, params, investigation_procedure)] + [ |
||||
participant( |
||||
participant_chans[i], hostseckeys[i], params, investigation_procedure |
||||
) |
||||
if i != faulty_idx |
||||
else faulty_participant(participant_chans[i], hostseckeys[i], params, i) |
||||
for i in range(n) |
||||
] |
||||
return await asyncio.gather(*coroutines) |
||||
|
||||
outputs = asyncio.run(session()) |
||||
return outputs |
||||
|
||||
|
||||
def main(): |
||||
parser = argparse.ArgumentParser(description="ChillDKG example") |
||||
parser.add_argument( |
||||
"--faulty-participant", |
||||
action="store_true", |
||||
help="When this flag is set, one random participant will send an invalid message, and the investigation procedure will be enabled for other participants and the coordinator.", |
||||
) |
||||
parser.add_argument( |
||||
"t", nargs="?", type=int, default=2, help="Signing threshold [default = 2]" |
||||
) |
||||
parser.add_argument( |
||||
"n", nargs="?", type=int, default=3, help="Number of participants [default = 3]" |
||||
) |
||||
args = parser.parse_args() |
||||
t = args.t |
||||
n = args.n |
||||
if args.faulty_participant: |
||||
faulty_idx = randint(0, n - 1) |
||||
else: |
||||
faulty_idx = None |
||||
|
||||
print("====== ChillDKG example session ======") |
||||
print(f"Using n = {n} participants and a threshold of t = {t}.") |
||||
if faulty_idx is not None: |
||||
print(f"Participant {faulty_idx} is faulty.") |
||||
print() |
||||
|
||||
# Generate common inputs for all participants and coordinator |
||||
hostseckeys = [random_bytes(32) for _ in range(n)] |
||||
hostpubkeys = [] |
||||
for i in range(n): |
||||
hostpubkeys += [hostpubkey_gen(hostseckeys[i])] |
||||
params = SessionParams(hostpubkeys, t) |
||||
|
||||
print("=== Host secret keys ===") |
||||
pphex(hostseckeys) |
||||
print() |
||||
|
||||
print("=== Session parameters ===") |
||||
pphex(params) |
||||
print() |
||||
print(f"Session parameters identifier: {params_id(params).hex()}") |
||||
print() |
||||
|
||||
try: |
||||
rets = simulate_chilldkg_full(hostseckeys, params, faulty_idx) |
||||
except FaultyParticipantOrCoordinatorError as e: |
||||
print( |
||||
f"A participant has failed and is blaming either participant {e.participant} or the coordinator." |
||||
) |
||||
# If the blamed participant is the faulty participant, exit with code 0. |
||||
# Otherwise, re-raise the exception. |
||||
if faulty_idx == e.participant: |
||||
return 0 |
||||
else: |
||||
raise |
||||
|
||||
assert len(rets) == n + 1 |
||||
print("=== Coordinator's DKG output ===") |
||||
dkg_output, _ = rets[0] |
||||
pphex(dkg_output) |
||||
print() |
||||
|
||||
for i in range(n): |
||||
print(f"=== Participant {i}'s DKG output ===") |
||||
dkg_output, _ = rets[i + 1] |
||||
pphex(dkg_output) |
||||
print() |
||||
|
||||
    # Check that the recovery data of all parties is identical
||||
assert len(set([rets[i][1] for i in range(n + 1)])) == 1 |
||||
recovery_data = rets[0][1] |
||||
print(f"=== Common recovery data ({len(recovery_data)} bytes) ===") |
||||
print(recovery_data.hex()) |
||||
|
||||
|
||||
if __name__ == "__main__": |
||||
sys.exit(main()) |
||||
@ -0,0 +1,395 @@
|
||||
#!/usr/bin/env python3 |
||||
|
||||
"""Tests for ChillDKG reference implementation""" |
||||
|
||||
import pytest |
||||
from itertools import combinations |
||||
from random import randint |
||||
from typing import Tuple, List, Optional |
||||
from secrets import token_bytes as random_bytes |
||||
|
||||
from jmfrost.secp256k1proto.secp256k1 import GE, G, Scalar |
||||
from jmfrost.secp256k1proto.keys import pubkey_gen_plain |
||||
|
||||
from jmfrost.chilldkg_ref.util import ( |
||||
FaultyParticipantOrCoordinatorError, |
||||
FaultyCoordinatorError, |
||||
UnknownFaultyParticipantOrCoordinatorError, |
||||
tagged_hash_bip_dkg, |
||||
) |
||||
from jmfrost.chilldkg_ref.vss import Polynomial, VSS, VSSCommitment |
||||
import jmfrost.chilldkg_ref.simplpedpop as simplpedpop |
||||
import jmfrost.chilldkg_ref.encpedpop as encpedpop |
||||
import jmfrost.chilldkg_ref.chilldkg as chilldkg |
||||
|
||||
from chilldkg_example import ( |
||||
simulate_chilldkg_full as simulate_chilldkg_full_example) |
||||
|
||||
|
||||
def test_chilldkg_params_validate(): |
||||
hostseckeys = [random_bytes(32) for _ in range(3)] |
||||
hostpubkeys = [chilldkg.hostpubkey_gen(hostseckey) for hostseckey in hostseckeys] |
||||
|
||||
with_duplicate = [hostpubkeys[0], hostpubkeys[1], hostpubkeys[2], hostpubkeys[1]] |
||||
params_with_duplicate = chilldkg.SessionParams(with_duplicate, 2) |
||||
try: |
||||
_ = chilldkg.params_id(params_with_duplicate) |
||||
except chilldkg.DuplicateHostPubkeyError as e: |
||||
assert {e.participant1, e.participant2} == {1, 3} |
||||
else: |
||||
assert False, "Expected exception" |
||||
|
||||
invalid_hostpubkey = b"\x03" + 31 * b"\x00" + b"\x05" # Invalid x-coordinate |
||||
params_with_invalid = chilldkg.SessionParams( |
||||
[hostpubkeys[1], invalid_hostpubkey, hostpubkeys[2]], 1 |
||||
) |
||||
try: |
||||
_ = chilldkg.params_id(params_with_invalid) |
||||
except chilldkg.InvalidHostPubkeyError as e: |
||||
assert e.participant == 1 |
||||
||||
else: |
||||
assert False, "Expected exception" |
||||
|
||||
try: |
||||
_ = chilldkg.params_id( |
||||
chilldkg.SessionParams(hostpubkeys, len(hostpubkeys) + 1) |
||||
) |
||||
except chilldkg.ThresholdOrCountError: |
||||
pass |
||||
else: |
||||
assert False, "Expected exception" |
||||
|
||||
try: |
||||
_ = chilldkg.params_id(chilldkg.SessionParams(hostpubkeys, -2)) |
||||
except chilldkg.ThresholdOrCountError: |
||||
pass |
||||
else: |
||||
assert False, "Expected exception" |
||||
|
||||
|
||||
def test_vss_correctness(): |
||||
def rand_polynomial(t): |
||||
return Polynomial([randint(1, GE.ORDER - 1) for _ in range(1, t + 1)]) |
||||
|
||||
for t in range(1, 3): |
||||
for n in range(t, 2 * t + 1): |
||||
f = rand_polynomial(t) |
||||
vss = VSS(f) |
||||
secshares = vss.secshares(n) |
||||
assert len(secshares) == n |
||||
assert all( |
||||
VSSCommitment.verify_secshare(secshares[i], vss.commit().pubshare(i)) |
||||
for i in range(n) |
||||
) |
||||
|
||||
|
||||
def simulate_simplpedpop( |
||||
seeds, t, investigation: bool |
||||
) -> Optional[List[Tuple[simplpedpop.DKGOutput, bytes]]]: |
||||
n = len(seeds) |
||||
prets = [] |
||||
for i in range(n): |
||||
prets += [simplpedpop.participant_step1(seeds[i], t, n, i)] |
||||
|
||||
pstates = [pstate for (pstate, _, _) in prets] |
||||
pmsgs = [pmsg for (_, pmsg, _) in prets] |
||||
|
||||
cmsg, cout, ceq = simplpedpop.coordinator_step(pmsgs, t, n) |
||||
pre_finalize_rets = [(cout, ceq)] |
||||
for i in range(n): |
||||
partial_secshares = [ |
||||
partial_secshares_for[i] for (_, _, partial_secshares_for) in prets |
||||
] |
||||
if investigation: |
||||
# Let a random participant send incorrect shares to participant i. |
||||
faulty_idx = randint(0, n - 1) |
||||
partial_secshares[faulty_idx] += Scalar(17) |
||||
|
||||
secshare = simplpedpop.participant_step2_prepare_secshare(partial_secshares) |
||||
try: |
||||
pre_finalize_rets += [ |
||||
simplpedpop.participant_step2(pstates[i], cmsg, secshare) |
||||
] |
||||
except UnknownFaultyParticipantOrCoordinatorError as e: |
||||
if not investigation: |
||||
raise |
||||
inv_msgs = simplpedpop.coordinator_investigate(pmsgs) |
||||
assert len(inv_msgs) == len(pmsgs) |
||||
try: |
||||
simplpedpop.participant_investigate(e, inv_msgs[i], partial_secshares) |
||||
# If we're not faulty, we should blame the faulty party. |
||||
except FaultyParticipantOrCoordinatorError as e: |
||||
assert i != faulty_idx |
||||
assert e.participant == faulty_idx |
||||
# If we're faulty, we'll blame the coordinator. |
||||
except FaultyCoordinatorError: |
||||
assert i == faulty_idx |
||||
return None |
||||
return pre_finalize_rets |
||||
|
||||
|
||||
def encpedpop_keys(seed: bytes) -> Tuple[bytes, bytes]: |
||||
deckey = tagged_hash_bip_dkg("encpedpop deckey", seed) |
||||
enckey = pubkey_gen_plain(deckey) |
||||
return deckey, enckey |
||||
|
||||
|
||||
def simulate_encpedpop( |
||||
seeds, t, investigation: bool |
||||
) -> Optional[List[Tuple[simplpedpop.DKGOutput, bytes]]]: |
||||
n = len(seeds) |
||||
enc_prets0 = [] |
||||
enc_prets1 = [] |
||||
for i in range(n): |
||||
enc_prets0 += [encpedpop_keys(seeds[i])] |
||||
|
||||
enckeys = [pret[1] for pret in enc_prets0] |
||||
for i in range(n): |
||||
deckey = enc_prets0[i][0] |
||||
random = random_bytes(32) |
||||
enc_prets1 += [ |
||||
encpedpop.participant_step1(seeds[i], deckey, enckeys, t, i, random) |
||||
] |
||||
|
||||
pstates = [pstate for (pstate, _) in enc_prets1] |
||||
pmsgs = [pmsg for (_, pmsg) in enc_prets1] |
||||
if investigation: |
||||
faulty_idx: List[int] = [] |
||||
for i in range(n): |
||||
# Let a random participant faulty_idx[i] send incorrect shares to i. |
||||
faulty_idx[i:] = [randint(0, n - 1)] |
||||
pmsgs[faulty_idx[i]].enc_shares[i] += Scalar(17) |
||||
|
||||
cmsg, cout, ceq, enc_secshares = encpedpop.coordinator_step(pmsgs, t, enckeys) |
||||
pre_finalize_rets = [(cout, ceq)] |
||||
for i in range(n): |
||||
deckey = enc_prets0[i][0] |
||||
try: |
||||
pre_finalize_rets += [ |
||||
encpedpop.participant_step2(pstates[i], deckey, cmsg, enc_secshares[i]) |
||||
] |
||||
except UnknownFaultyParticipantOrCoordinatorError as e: |
||||
if not investigation: |
||||
raise |
||||
inv_msgs = encpedpop.coordinator_investigate(pmsgs) |
||||
assert len(inv_msgs) == len(pmsgs) |
||||
try: |
||||
encpedpop.participant_investigate(e, inv_msgs[i]) |
||||
# If we're not faulty, we should blame the faulty party. |
||||
except FaultyParticipantOrCoordinatorError as e: |
||||
assert i != faulty_idx[i] |
||||
assert e.participant == faulty_idx[i] |
||||
# If we're faulty, we'll blame the coordinator. |
||||
except FaultyCoordinatorError: |
||||
assert i == faulty_idx[i] |
||||
return None |
||||
return pre_finalize_rets |
||||
|
||||
|
||||
def simulate_chilldkg( |
||||
hostseckeys, t, investigation: bool |
||||
) -> Optional[List[Tuple[chilldkg.DKGOutput, chilldkg.RecoveryData]]]: |
||||
n = len(hostseckeys) |
||||
|
||||
hostpubkeys = [] |
||||
for i in range(n): |
||||
hostpubkeys += [chilldkg.hostpubkey_gen(hostseckeys[i])] |
||||
|
||||
params = chilldkg.SessionParams(hostpubkeys, t) |
||||
|
||||
prets1 = [] |
||||
for i in range(n): |
||||
random = random_bytes(32) |
||||
prets1 += [chilldkg.participant_step1(hostseckeys[i], params, random)] |
||||
|
||||
pstates1 = [pret[0] for pret in prets1] |
||||
pmsgs = [pret[1] for pret in prets1] |
||||
if investigation: |
||||
faulty_idx: List[int] = [] |
||||
for i in range(n): |
||||
# Let a random participant faulty_idx[i] send incorrect shares to i. |
||||
faulty_idx[i:] = [randint(0, n - 1)] |
||||
pmsgs[faulty_idx[i]].enc_pmsg.enc_shares[i] += Scalar(17) |
||||
|
||||
cstate, cmsg1 = chilldkg.coordinator_step1(pmsgs, params) |
||||
|
||||
prets2 = [] |
||||
for i in range(n): |
||||
try: |
||||
prets2 += [chilldkg.participant_step2(hostseckeys[i], pstates1[i], cmsg1)] |
||||
except UnknownFaultyParticipantOrCoordinatorError as e: |
||||
if not investigation: |
||||
raise |
||||
inv_msgs = chilldkg.coordinator_investigate(pmsgs) |
||||
assert len(inv_msgs) == len(pmsgs) |
||||
try: |
||||
chilldkg.participant_investigate(e, inv_msgs[i]) |
||||
# If we're not faulty, we should blame the faulty party. |
||||
except FaultyParticipantOrCoordinatorError as e: |
||||
assert i != faulty_idx[i] |
||||
assert e.participant == faulty_idx[i] |
||||
# If we're faulty, we'll blame the coordinator. |
||||
except FaultyCoordinatorError: |
||||
assert i == faulty_idx[i] |
||||
return None |
||||
|
||||
cmsg2, cout, crec = chilldkg.coordinator_finalize( |
||||
cstate, [pret[1] for pret in prets2] |
||||
) |
||||
outputs = [(cout, crec)] |
||||
for i in range(n): |
||||
out = chilldkg.participant_finalize(prets2[i][0], cmsg2) |
||||
assert out is not None |
||||
outputs += [out] |
||||
|
||||
return outputs |
||||
|
||||
|
||||
def simulate_chilldkg_full( |
||||
hostseckeys, |
||||
t, |
||||
investigation: bool, |
||||
) -> List[Optional[Tuple[chilldkg.DKGOutput, chilldkg.RecoveryData]]]: |
||||
# Investigating is not supported by this wrapper |
||||
assert not investigation |
||||
|
||||
hostpubkeys = [] |
||||
n = len(hostseckeys) |
||||
for i in range(n): |
||||
hostpubkeys += [chilldkg.hostpubkey_gen(hostseckeys[i])] |
||||
params = chilldkg.SessionParams(hostpubkeys, t) |
||||
return simulate_chilldkg_full_example(hostseckeys, params, faulty_idx=None) |
||||
|
||||
|
||||
def derive_interpolating_value(L, x_i): |
||||
assert x_i in L |
||||
assert all(L.count(x_j) <= 1 for x_j in L) |
||||
lam = Scalar(1) |
||||
for x_j in L: |
||||
x_j = Scalar(x_j) |
||||
x_i = Scalar(x_i) |
||||
if x_j == x_i: |
||||
continue |
||||
lam *= x_j / (x_j - x_i) |
||||
return lam |
||||
|
||||
|
||||
def recover_secret(participant_indices, shares) -> Scalar: |
||||
interpolated_shares = [] |
||||
t = len(shares) |
||||
assert len(participant_indices) == t |
||||
for i in range(t): |
||||
lam = derive_interpolating_value(participant_indices, participant_indices[i]) |
||||
interpolated_shares += [(lam * shares[i])] |
||||
recovered_secret = Scalar.sum(*interpolated_shares) |
||||
return recovered_secret |
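The interpolation performed by `derive_interpolating_value` and `recover_secret` can be sketched over a small prime field with plain integers; the prime and names below are stand-ins for the secp256k1 group order and `Scalar`, for illustration only:

```python
Q = 2**31 - 1  # a small prime, standing in for the secp256k1 group order

def lagrange_at_zero(points):
    """Recover f(0) mod Q from t points (x_i, f(x_i)) of a degree < t poly."""
    secret = 0
    for i, (x_i, y_i) in enumerate(points):
        lam = 1
        for j, (x_j, _) in enumerate(points):
            if j != i:
                # lam *= x_j / (x_j - x_i), as in derive_interpolating_value
                lam = lam * x_j * pow(x_j - x_i, -1, Q) % Q
        secret = (secret + lam * y_i) % Q
    return secret
```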
||||
|
||||
|
||||
def test_recover_secret(): |
||||
f = Polynomial([23, 42]) |
||||
shares = [f(i) for i in [1, 2, 3]] |
||||
assert recover_secret([1, 2], [shares[0], shares[1]]) == f.coeffs[0] |
||||
assert recover_secret([1, 3], [shares[0], shares[2]]) == f.coeffs[0] |
||||
assert recover_secret([2, 3], [shares[1], shares[2]]) == f.coeffs[0] |
||||
|
||||
|
||||
def check_correctness_dkg_output(t, n, dkg_outputs: List[simplpedpop.DKGOutput]): |
||||
assert len(dkg_outputs) == n + 1 |
||||
secshares = [out[0] for out in dkg_outputs] |
||||
threshold_pubkeys = [out[1] for out in dkg_outputs] |
||||
pubshares = [out[2] for out in dkg_outputs] |
||||
|
||||
# Check that the threshold pubkey and pubshares are the same for the |
||||
# coordinator (at [0]) and all participants (at [1:n + 1]). |
||||
for i in range(n + 1): |
||||
assert threshold_pubkeys[0] == threshold_pubkeys[i] |
||||
assert len(pubshares[i]) == n |
||||
assert pubshares[0] == pubshares[i] |
||||
threshold_pubkey = threshold_pubkeys[0] |
||||
|
||||
# Check that the coordinator has no secret share |
||||
assert secshares[0] is None |
||||
|
||||
# Check that each secshare matches the corresponding pubshare |
||||
secshares_scalar = [ |
||||
None if secshare is None else Scalar.from_bytes(secshare) |
||||
for secshare in secshares |
||||
] |
||||
for i in range(1, n + 1): |
||||
assert secshares_scalar[i] * G == GE.from_bytes_compressed(pubshares[0][i - 1]) |
||||
|
||||
# Check that all combinations of t participants can recover the threshold pubkey |
||||
for tsubset in combinations(range(1, n + 1), t): |
||||
recovered = recover_secret(tsubset, [secshares_scalar[i] for i in tsubset]) |
||||
assert recovered * G == GE.from_bytes_compressed(threshold_pubkey) |
||||
|
||||
|
||||
@pytest.mark.parametrize('t,n,simulate_dkg,recovery,investigation', [ |
||||
[1, 1, simulate_simplpedpop, False, False], |
||||
[1, 1, simulate_simplpedpop, False, True], |
||||
[1, 1, simulate_encpedpop, False, False], |
||||
[1, 1, simulate_encpedpop, False, True], |
||||
[1, 1, simulate_chilldkg, True, False], |
||||
[1, 1, simulate_chilldkg, True, True], |
||||
[1, 1, simulate_chilldkg_full, True, False], |
||||
|
||||
[1, 2, simulate_simplpedpop, False, False], |
||||
[1, 2, simulate_simplpedpop, False, True], |
||||
[1, 2, simulate_encpedpop, False, False], |
||||
[1, 2, simulate_encpedpop, False, True], |
||||
[1, 2, simulate_chilldkg, True, False], |
||||
[1, 2, simulate_chilldkg, True, True], |
||||
[1, 2, simulate_chilldkg_full, True, False], |
||||
|
||||
[2, 2, simulate_simplpedpop, False, False], |
||||
[2, 2, simulate_simplpedpop, False, True], |
||||
[2, 2, simulate_encpedpop, False, False], |
||||
[2, 2, simulate_encpedpop, False, True], |
||||
[2, 2, simulate_chilldkg, True, False], |
||||
[2, 2, simulate_chilldkg, True, True], |
||||
[2, 2, simulate_chilldkg_full, True, False], |
||||
|
||||
[2, 3, simulate_simplpedpop, False, False], |
||||
[2, 3, simulate_simplpedpop, False, True], |
||||
[2, 3, simulate_encpedpop, False, False], |
||||
[2, 3, simulate_encpedpop, False, True], |
||||
[2, 3, simulate_chilldkg, True, False], |
||||
[2, 3, simulate_chilldkg, True, True], |
||||
[2, 3, simulate_chilldkg_full, True, False], |
||||
|
||||
[2, 5, simulate_simplpedpop, False, False], |
||||
[2, 5, simulate_simplpedpop, False, True], |
||||
[2, 5, simulate_encpedpop, False, False], |
||||
[2, 5, simulate_encpedpop, False, True], |
||||
[2, 5, simulate_chilldkg, True, False], |
||||
[2, 5, simulate_chilldkg, True, True], |
||||
[2, 5, simulate_chilldkg_full, True, False], |
||||
]) |
||||
def test_correctness(t, n, simulate_dkg, recovery, investigation): |
||||
seeds = [None] + [random_bytes(32) for _ in range(n)] |
||||
|
||||
rets = simulate_dkg(seeds[1:], t, investigation=investigation) |
||||
if investigation: |
||||
assert rets is None |
||||
        # The session has failed as expected, so there's nothing further to check.
||||
return |
||||
|
||||
# rets[0] are the return values from the coordinator |
||||
# rets[1 : n + 1] are from the participants |
||||
assert len(rets) == n + 1 |
||||
dkg_outputs = [ret[0] for ret in rets] |
||||
check_correctness_dkg_output(t, n, dkg_outputs) |
||||
|
||||
eqs_or_recs = [ret[1] for ret in rets] |
||||
for i in range(1, n + 1): |
||||
assert eqs_or_recs[0] == eqs_or_recs[i] |
||||
|
||||
if recovery: |
||||
rec = eqs_or_recs[0] |
||||
# Check correctness of chilldkg.recover |
||||
for i in range(n + 1): |
||||
(secshare, threshold_pubkey, pubshares), _ = chilldkg.recover(seeds[i], rec) |
||||
assert secshare == dkg_outputs[i][0] |
||||
assert threshold_pubkey == dkg_outputs[i][1] |
||||
assert pubshares == dkg_outputs[i][2] |
||||
@ -0,0 +1,505 @@
|
||||
# -*- coding: utf-8 -*- |
||||
|
||||
import json
import os
import secrets
import sys
from typing import List, Tuple

from .trusted_keygen import trusted_dealer_keygen

# Note: PlainPk, cbytes, bytes_from_int, InvalidContributionError and the
# group order n used below are assumed to be provided by the accompanying
# secp256k1proto/FROST reference helpers; their import lines are not shown
# in this excerpt.
||||
|
||||
|
||||
def fromhex_all(l): |
||||
return [bytes.fromhex(l_i) for l_i in l] |
||||
|
||||
# Check that calling `try_fn` raises `exception`. If `exception` is raised,
# examine it with `except_fn`.
||||
def assert_raises(exception, try_fn, except_fn): |
||||
raised = False |
||||
try: |
||||
try_fn() |
||||
except exception as e: |
||||
raised = True |
||||
        assert except_fn(e)
||||
except BaseException: |
||||
raise AssertionError("Wrong exception raised in a test.") |
||||
if not raised: |
||||
raise AssertionError("Exception was _not_ raised in a test where it was required.") |
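As a quick sanity check of the helper's contract above, here is a self-contained sketch (restating `assert_raises` so it runs standalone) showing the pass case and the no-exception failure mode:

```python
def assert_raises(exception, try_fn, except_fn):
    # Same contract as the test helper above: try_fn must raise `exception`,
    # and the raised instance must satisfy except_fn.
    raised = False
    try:
        try_fn()
    except exception as e:
        raised = True
        assert except_fn(e)
    except BaseException:
        raise AssertionError("Wrong exception raised in a test.")
    if not raised:
        raise AssertionError("Exception was _not_ raised in a test where it was required.")

# Passes: ValueError is raised and its message satisfies except_fn.
assert_raises(ValueError, lambda: int("x"), lambda e: "invalid literal" in str(e))

# Fails: nothing is raised, so assert_raises itself raises AssertionError.
try:
    assert_raises(ValueError, lambda: int("7"), lambda e: True)
except AssertionError:
    pass
else:
    raise SystemExit("expected AssertionError")
```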

def get_error_details(test_case):
    error = test_case["error"]
    if error["type"] == "invalid_contribution":
        exception = InvalidContributionError
        if "contrib" in error:
            except_fn = lambda e: e.id == error["signer_id"] and e.contrib == error["contrib"]
        else:
            except_fn = lambda e: e.id == error["signer_id"]
    elif error["type"] == "value":
        exception = ValueError
        except_fn = lambda e: str(e) == error["message"]
    else:
        raise RuntimeError(f"Invalid error type: {error['type']}")
    return exception, except_fn


def generate_frost_keys(max_participants: int, min_participants: int) -> Tuple[PlainPk, List[bytes], List[bytes], List[PlainPk]]:
    if not (2 <= min_participants <= max_participants):
        raise ValueError('values must satisfy: 2 <= min_participants <= max_participants')

    # n is the group order; choose the group secret uniformly from [1, n - 1]
    secret = secrets.randbelow(n - 1) + 1
    P, secshares, pubshares = trusted_dealer_keygen(secret, max_participants, min_participants)

    group_pk = PlainPk(cbytes(P))
    ser_identifiers = [bytes_from_int(secshare_i[0]) for secshare_i in secshares]
    ser_secshares = [bytes_from_int(secshare_i[1]) for secshare_i in secshares]
    ser_pubshares = [PlainPk(cbytes(pubshare_i)) for pubshare_i in pubshares]
    return (group_pk, ser_identifiers, ser_secshares, ser_pubshares)
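The shares handed out by `trusted_dealer_keygen` are Shamir shares of the group secret. For intuition, a minimal toy sketch (small prime field and hypothetical helper names, not this codebase's API) of dealing `max_participants` shares with threshold `min_participants` and recombining any threshold-sized subset via Lagrange interpolation:

```python
import secrets

P = 2**127 - 1  # toy prime modulus, NOT the secp256k1 group order

def deal_shares(secret, max_participants, min_participants):
    # Random polynomial of degree min_participants - 1 with f(0) = secret;
    # participant i receives the share (i, f(i)).
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(min_participants - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, max_participants + 1)]

def recombine(shares):
    # Lagrange interpolation at x = 0 over the toy field.
    secret = 0
    for i, (x_i, y_i) in enumerate(shares):
        num, den = 1, 1
        for j, (x_j, _) in enumerate(shares):
            if i != j:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        secret = (secret + y_i * num * pow(den, -1, P)) % P
    return secret

secret = secrets.randbelow(P)
shares = deal_shares(secret, max_participants=5, min_participants=3)
assert recombine(shares[:3]) == secret   # any 3 shares suffice
assert recombine(shares[2:]) == secret   # a different subset works too
assert recombine(shares[:2]) != secret   # 2 shares do not determine the secret
```

In the real scheme the same interpolation happens in the exponent, which is why `min_participants` pubshares determine the group public key.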

def test_keygen_vectors():
    with open(os.path.join(sys.path[0], 'vectors', 'keygen_vectors.json')) as f:
        test_data = json.load(f)

    valid_test_cases = test_data["valid_test_cases"]
    for test_case in valid_test_cases:
        max_participants = test_case["max_participants"]
        min_participants = test_case["min_participants"]
        group_pk = bytes.fromhex(test_case["group_public_key"])
        # todo: assert the lengths using min & max participants?
        ids = [bytes_from_int(i) for i in test_case["participant_identifiers"]]
        pubshares = fromhex_all(test_case["participant_pubshares"])
        secshares = fromhex_all(test_case["participant_secshares"])

        assert check_frost_key_compatibility(max_participants, min_participants, group_pk, ids, secshares, pubshares)

    pubshare_fail_test_cases = test_data["pubshare_correctness_fail_test_cases"]
    for test_case in pubshare_fail_test_cases:
        pubshares = fromhex_all(test_case["participant_pubshares"])
        secshares = fromhex_all(test_case["participant_secshares"])

        assert not check_pubshares_correctness(secshares, pubshares)

    group_pubkey_fail_test_cases = test_data["group_pubkey_correctness_fail_test_cases"]
    for test_case in group_pubkey_fail_test_cases:
        max_participants = test_case["max_participants"]
        min_participants = test_case["min_participants"]
        group_pk = bytes.fromhex(test_case["group_public_key"])
        ids = [bytes_from_int(i) for i in test_case["participant_identifiers"]]
        pubshares = fromhex_all(test_case["participant_pubshares"])
        secshares = fromhex_all(test_case["participant_secshares"])

        assert not check_group_pubkey_correctness(min_participants, group_pk, ids, pubshares)

def test_nonce_gen_vectors():
    with open(os.path.join(sys.path[0], 'vectors', 'nonce_gen_vectors.json')) as f:
        test_data = json.load(f)

    for test_case in test_data["test_cases"]:
        def get_value(key) -> bytes:
            return bytes.fromhex(test_case[key])

        def get_value_maybe(key) -> Optional[bytes]:
            if test_case[key] is not None:
                return get_value(key)
            else:
                return None

        rand_ = get_value("rand_")
        secshare = get_value_maybe("secshare")
        pubshare = get_value_maybe("pubshare")
        if pubshare is not None:
            pubshare = PlainPk(pubshare)
        group_pk = get_value_maybe("group_pk")
        if group_pk is not None:
            group_pk = XonlyPk(group_pk)
        msg = get_value_maybe("msg")
        extra_in = get_value_maybe("extra_in")
        expected_secnonce = get_value("expected_secnonce")
        expected_pubnonce = get_value("expected_pubnonce")

        assert nonce_gen_internal(rand_, secshare, pubshare, group_pk, msg, extra_in) == (expected_secnonce, expected_pubnonce)

def test_nonce_agg_vectors():
    with open(os.path.join(sys.path[0], 'vectors', 'nonce_agg_vectors.json')) as f:
        test_data = json.load(f)

    pubnonces_list = fromhex_all(test_data["pubnonces"])
    valid_test_cases = test_data["valid_test_cases"]
    error_test_cases = test_data["error_test_cases"]

    for test_case in valid_test_cases:
        # todo: assert that min_participants <= len(pubnonces), len(ids) <= max_participants
        # todo: assert the values of ids too? 1 <= id <= max_participants?
        pubnonces = [pubnonces_list[i] for i in test_case["pubnonce_indices"]]
        ids = [bytes_from_int(i) for i in test_case["participant_identifiers"]]
        expected_aggnonce = bytes.fromhex(test_case["expected_aggnonce"])
        assert nonce_agg(pubnonces, ids) == expected_aggnonce

    for test_case in error_test_cases:
        exception, except_fn = get_error_details(test_case)
        pubnonces = [pubnonces_list[i] for i in test_case["pubnonce_indices"]]
        ids = [bytes_from_int(i) for i in test_case["participant_identifiers"]]
        assert_raises(exception, lambda: nonce_agg(pubnonces, ids), except_fn)

# todo: include vectors from the FROST draft too
# todo: add a test where group_pk is even (might need to modify the json file)
def test_sign_verify_vectors():
    with open(os.path.join(sys.path[0], 'vectors', 'sign_verify_vectors.json')) as f:
        test_data = json.load(f)

    max_participants = test_data["max_participants"]
    min_participants = test_data["min_participants"]
    group_pk = XonlyPk(bytes.fromhex(test_data["group_public_key"]))
    secshare_p1 = bytes.fromhex(test_data["secshare_p1"])
    ids = test_data["identifiers"]
    pubshares = fromhex_all(test_data["pubshares"])
    # The public key corresponding to the first participant (secshare_p1) is at index 0
    assert pubshares[0] == individual_pk(secshare_p1)

    secnonces_p1 = fromhex_all(test_data["secnonces_p1"])
    pubnonces = fromhex_all(test_data["pubnonces"])
    # The public nonce corresponding to the first participant (secnonces_p1[0]) is at index 0
    k_1 = int_from_bytes(secnonces_p1[0][0:32])
    k_2 = int_from_bytes(secnonces_p1[0][32:64])
    R_s1 = point_mul(G, k_1)
    R_s2 = point_mul(G, k_2)
    assert R_s1 is not None and R_s2 is not None
    assert pubnonces[0] == cbytes(R_s1) + cbytes(R_s2)

    aggnonces = fromhex_all(test_data["aggnonces"])
    msgs = fromhex_all(test_data["msgs"])

    valid_test_cases = test_data["valid_test_cases"]
    sign_error_test_cases = test_data["sign_error_test_cases"]
    verify_fail_test_cases = test_data["verify_fail_test_cases"]
    verify_error_test_cases = test_data["verify_error_test_cases"]

    for test_case in valid_test_cases:
        ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
        pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
        aggnonce_tmp = aggnonces[test_case["aggnonce_index"]]
        # Make sure that pubnonces and aggnonce in the test vector are consistent
        assert nonce_agg(pubnonces_tmp, ids_tmp) == aggnonce_tmp
        msg = msgs[test_case["msg_index"]]
        signer_index = test_case["signer_index"]
        my_id = ids_tmp[signer_index]
        expected = bytes.fromhex(test_case["expected"])

        session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, [], [], msg)
        # WARNING: An actual implementation should _not_ copy the secnonce.
        # Reusing the secnonce, as we do here for testing purposes, can leak the
        # secret key.
        secnonce_tmp = bytearray(secnonces_p1[0])
        assert sign(secnonce_tmp, secshare_p1, my_id, session_ctx) == expected
        assert partial_sig_verify(expected, ids_tmp, pubnonces_tmp, pubshares_tmp, [], [], msg, signer_index)

    for test_case in sign_error_test_cases:
        exception, except_fn = get_error_details(test_case)
        ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
        aggnonce_tmp = aggnonces[test_case["aggnonce_index"]]
        msg = msgs[test_case["msg_index"]]
        signer_index = test_case["signer_index"]
        my_id = bytes_from_int(test_case["signer_id"]) if signer_index is None else ids_tmp[signer_index]
        secnonce_tmp = bytearray(secnonces_p1[test_case["secnonce_index"]])

        session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, [], [], msg)
        assert_raises(exception, lambda: sign(secnonce_tmp, secshare_p1, my_id, session_ctx), except_fn)

    for test_case in verify_fail_test_cases:
        psig = bytes.fromhex(test_case["psig"])
        ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
        pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
        msg = msgs[test_case["msg_index"]]
        signer_index = test_case["signer_index"]

        assert not partial_sig_verify(psig, ids_tmp, pubnonces_tmp, pubshares_tmp, [], [], msg, signer_index)

    for test_case in verify_error_test_cases:
        exception, except_fn = get_error_details(test_case)

        psig = bytes.fromhex(test_case["psig"])
        ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
        pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
        msg = msgs[test_case["msg_index"]]
        signer_index = test_case["signer_index"]
        assert_raises(exception, lambda: partial_sig_verify(psig, ids_tmp, pubnonces_tmp, pubshares_tmp, [], [], msg, signer_index), except_fn)

def test_tweak_vectors():
    with open(os.path.join(sys.path[0], 'vectors', 'tweak_vectors.json')) as f:
        test_data = json.load(f)

    max_participants = test_data["max_participants"]
    min_participants = test_data["min_participants"]
    group_pk = XonlyPk(bytes.fromhex(test_data["group_public_key"]))
    secshare_p1 = bytes.fromhex(test_data["secshare_p1"])
    ids = test_data["identifiers"]
    pubshares = fromhex_all(test_data["pubshares"])
    # The public key corresponding to the first participant (secshare_p1) is at index 0
    assert pubshares[0] == individual_pk(secshare_p1)

    secnonce_p1 = bytearray(bytes.fromhex(test_data["secnonce_p1"]))
    pubnonces = fromhex_all(test_data["pubnonces"])
    # The public nonce corresponding to the first participant (secnonce_p1) is at index 0
    k_1 = int_from_bytes(secnonce_p1[0:32])
    k_2 = int_from_bytes(secnonce_p1[32:64])
    R_s1 = point_mul(G, k_1)
    R_s2 = point_mul(G, k_2)
    assert R_s1 is not None and R_s2 is not None
    assert pubnonces[0] == cbytes(R_s1) + cbytes(R_s2)

    aggnonces = fromhex_all(test_data["aggnonces"])
    tweaks = fromhex_all(test_data["tweaks"])

    msg = bytes.fromhex(test_data["msg"])

    valid_test_cases = test_data["valid_test_cases"]
    error_test_cases = test_data["error_test_cases"]

    for test_case in valid_test_cases:
        ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
        pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
        aggnonce_tmp = aggnonces[test_case["aggnonce_index"]]
        # Make sure that pubnonces and aggnonce in the test vector are consistent
        assert nonce_agg(pubnonces_tmp, ids_tmp) == aggnonce_tmp
        tweaks_tmp = [tweaks[i] for i in test_case["tweak_indices"]]
        tweak_modes_tmp = test_case["is_xonly"]
        signer_index = test_case["signer_index"]
        my_id = ids_tmp[signer_index]
        expected = bytes.fromhex(test_case["expected"])

        session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, tweaks_tmp, tweak_modes_tmp, msg)
        # WARNING: An actual implementation should _not_ copy the secnonce.
        # Reusing the secnonce, as we do here for testing purposes, can leak the
        # secret key.
        secnonce_tmp = bytearray(secnonce_p1)
        assert sign(secnonce_tmp, secshare_p1, my_id, session_ctx) == expected
        assert partial_sig_verify(expected, ids_tmp, pubnonces_tmp, pubshares_tmp, tweaks_tmp, tweak_modes_tmp, msg, signer_index)

    for test_case in error_test_cases:
        exception, except_fn = get_error_details(test_case)
        ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
        aggnonce_tmp = aggnonces[test_case["aggnonce_index"]]
        tweaks_tmp = [tweaks[i] for i in test_case["tweak_indices"]]
        tweak_modes_tmp = test_case["is_xonly"]
        signer_index = test_case["signer_index"]
        my_id = ids_tmp[signer_index]

        session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, tweaks_tmp, tweak_modes_tmp, msg)
        # Use a fresh copy so a secnonce zeroed by a previous error case cannot
        # mask the error this case expects.
        secnonce_tmp = bytearray(secnonce_p1)
        assert_raises(exception, lambda: sign(secnonce_tmp, secshare_p1, my_id, session_ctx), except_fn)

def test_det_sign_vectors():
    with open(os.path.join(sys.path[0], 'vectors', 'det_sign_vectors.json')) as f:
        test_data = json.load(f)

    max_participants = test_data["max_participants"]
    min_participants = test_data["min_participants"]
    group_pk = XonlyPk(bytes.fromhex(test_data["group_public_key"]))
    secshare_p1 = bytes.fromhex(test_data["secshare_p1"])
    ids = test_data["identifiers"]
    pubshares = fromhex_all(test_data["pubshares"])
    # The public key corresponding to the first participant (secshare_p1) is at index 0
    assert pubshares[0] == individual_pk(secshare_p1)

    msgs = fromhex_all(test_data["msgs"])

    valid_test_cases = test_data["valid_test_cases"]
    sign_error_test_cases = test_data["sign_error_test_cases"]

    for test_case in valid_test_cases:
        ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
        aggothernonce = bytes.fromhex(test_case["aggothernonce"])
        tweaks = fromhex_all(test_case["tweaks"])
        is_xonly = test_case["is_xonly"]
        msg = msgs[test_case["msg_index"]]
        signer_index = test_case["signer_index"]
        my_id = ids_tmp[signer_index]
        rand = bytes.fromhex(test_case["rand"]) if test_case["rand"] is not None else None
        expected = fromhex_all(test_case["expected"])

        pubnonce, psig = deterministic_sign(secshare_p1, my_id, aggothernonce, ids_tmp, pubshares_tmp, tweaks, is_xonly, msg, rand)
        assert pubnonce == expected[0]
        assert psig == expected[1]

        pubnonces = [aggothernonce, pubnonce]
        aggnonce_tmp = nonce_agg(pubnonces, [AGGREGATOR_ID, my_id])
        session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, tweaks, is_xonly, msg)
        assert partial_sig_verify_internal(psig, my_id, pubnonce, pubshares_tmp[signer_index], session_ctx)

    for test_case in sign_error_test_cases:
        exception, except_fn = get_error_details(test_case)
        ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
        aggothernonce = bytes.fromhex(test_case["aggothernonce"])
        tweaks = fromhex_all(test_case["tweaks"])
        is_xonly = test_case["is_xonly"]
        msg = msgs[test_case["msg_index"]]
        signer_index = test_case["signer_index"]
        my_id = bytes_from_int(test_case["signer_id"]) if signer_index is None else ids_tmp[signer_index]
        rand = bytes.fromhex(test_case["rand"]) if test_case["rand"] is not None else None

        try_fn = lambda: deterministic_sign(secshare_p1, my_id, aggothernonce, ids_tmp, pubshares_tmp, tweaks, is_xonly, msg, rand)
        assert_raises(exception, try_fn, except_fn)

def test_sig_agg_vectors():
    with open(os.path.join(sys.path[0], 'vectors', 'sig_agg_vectors.json')) as f:
        test_data = json.load(f)

    max_participants = test_data["max_participants"]
    min_participants = test_data["min_participants"]
    group_pk = XonlyPk(bytes.fromhex(test_data["group_public_key"]))
    ids = test_data["identifiers"]
    pubshares = fromhex_all(test_data["pubshares"])
    # These nonces are only required if the tested API takes the individual
    # nonces and not the aggregate nonce.
    pubnonces = fromhex_all(test_data["pubnonces"])

    tweaks = fromhex_all(test_data["tweaks"])
    psigs = fromhex_all(test_data["psigs"])
    msg = bytes.fromhex(test_data["msg"])

    valid_test_cases = test_data["valid_test_cases"]
    error_test_cases = test_data["error_test_cases"]

    for test_case in valid_test_cases:
        ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
        pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
        aggnonce_tmp = bytes.fromhex(test_case["aggnonce"])
        # Make sure that pubnonces and aggnonce in the test vector are consistent
        assert aggnonce_tmp == nonce_agg(pubnonces_tmp, ids_tmp)

        tweaks_tmp = [tweaks[i] for i in test_case["tweak_indices"]]
        tweak_modes_tmp = test_case["is_xonly"]
        psigs_tmp = [psigs[i] for i in test_case["psig_indices"]]
        expected = bytes.fromhex(test_case["expected"])

        session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, tweaks_tmp, tweak_modes_tmp, msg)
        # Make sure that the partial signatures in the test vector are consistent.
        # Since the tested API takes only the aggnonce (not the pubnonces list),
        # implementations may skip this check.
        for i in range(len(ids_tmp)):
            assert partial_sig_verify(psigs_tmp[i], ids_tmp, pubnonces_tmp, pubshares_tmp, tweaks_tmp, tweak_modes_tmp, msg, i)

        bip340sig = partial_sig_agg(psigs_tmp, ids_tmp, session_ctx)
        assert bip340sig == expected
        tweaked_group_pk = get_xonly_pk(group_pubkey_and_tweak(pubshares_tmp, ids_tmp, tweaks_tmp, tweak_modes_tmp))
        assert schnorr_verify(msg, tweaked_group_pk, bip340sig)

    for test_case in error_test_cases:
        exception, except_fn = get_error_details(test_case)

        ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
        pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
        pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
        aggnonce_tmp = bytes.fromhex(test_case["aggnonce"])

        tweaks_tmp = [tweaks[i] for i in test_case["tweak_indices"]]
        tweak_modes_tmp = test_case["is_xonly"]
        psigs_tmp = [psigs[i] for i in test_case["psig_indices"]]

        session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, tweaks_tmp, tweak_modes_tmp, msg)
        assert_raises(exception, lambda: partial_sig_agg(psigs_tmp, ids_tmp, session_ctx), except_fn)

def test_sign_and_verify_random(iterations: int) -> None:
    for itr in range(iterations):
        secure_rng = secrets.SystemRandom()
        # randomly choose a number: 2 <= number <= 10
        max_participants = secure_rng.randrange(2, 11)
        # randomly choose a number: 2 <= number <= max_participants
        min_participants = secure_rng.randrange(2, max_participants + 1)

        group_pk, ids, secshares, pubshares = generate_frost_keys(max_participants, min_participants)
        assert len(ids) == len(secshares) == len(pubshares) == max_participants
        assert check_frost_key_compatibility(max_participants, min_participants, group_pk, ids, secshares, pubshares)

        # randomly choose the signer set, with min_participants <= len <= max_participants
        signer_count = secure_rng.randrange(min_participants, max_participants + 1)
        signer_indices = secure_rng.sample(range(max_participants), signer_count)
        assert len(set(signer_indices)) == signer_count  # signer set must not contain duplicate ids

        signer_ids = [ids[i] for i in signer_indices]
        signer_pubshares = [pubshares[i] for i in signer_indices]
        # NOTE: secret values must never be copied in production code;
        # we do it here only for readability.
        signer_secshares = [secshares[i] for i in signer_indices]

        # In this example, the message and group pubkey are known
        # before nonce generation, so they can be passed into the nonce
        # generation function as a defense-in-depth measure to protect
        # against nonce reuse.
        #
        # If these values are not known when nonce_gen is called, empty
        # byte arrays can be passed in for the corresponding arguments
        # instead.
        msg = secrets.token_bytes(32)
        v = secrets.randbelow(4)
        tweaks = [secrets.token_bytes(32) for _ in range(v)]
        tweak_modes = [secrets.choice([False, True]) for _ in range(v)]
        tweaked_group_pk = get_xonly_pk(group_pubkey_and_tweak(signer_pubshares, signer_ids, tweaks, tweak_modes))

        signer_secnonces = []
        signer_pubnonces = []
        for i in range(signer_count - 1):
            # Use a monotonic clock reading for extra_in
            t = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
            secnonce_i, pubnonce_i = nonce_gen(signer_secshares[i], signer_pubshares[i], tweaked_group_pk, msg, t.to_bytes(8, 'big'))
            signer_secnonces.append(secnonce_i)
            signer_pubnonces.append(pubnonce_i)

        # On even iterations use the regular signing algorithm for the final signer,
        # otherwise use the deterministic signing algorithm
        if itr % 2 == 0:
            t = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
            secnonce_final, pubnonce_final = nonce_gen(signer_secshares[-1], signer_pubshares[-1], tweaked_group_pk, msg, t.to_bytes(8, 'big'))
            signer_secnonces.append(secnonce_final)
        else:
            aggothernonce = nonce_agg(signer_pubnonces, signer_ids[:-1])
            rand = secrets.token_bytes(32)
            pubnonce_final, psig_final = deterministic_sign(signer_secshares[-1], signer_ids[-1], aggothernonce, signer_ids, signer_pubshares, tweaks, tweak_modes, msg, rand)

        signer_pubnonces.append(pubnonce_final)
        aggnonce = nonce_agg(signer_pubnonces, signer_ids)
        session_ctx = SessionContext(aggnonce, signer_ids, signer_pubshares, tweaks, tweak_modes, msg)

        signer_psigs = []
        for i in range(signer_count):
            if itr % 2 != 0 and i == signer_count - 1:
                psig_i = psig_final  # the last signer has already signed deterministically
            else:
                psig_i = sign(signer_secnonces[i], signer_secshares[i], signer_ids[i], session_ctx)
            assert partial_sig_verify(psig_i, signer_ids, signer_pubnonces, signer_pubshares, tweaks, tweak_modes, msg, i)
            signer_psigs.append(psig_i)

        # An exception is thrown if a secnonce is accidentally reused
        assert_raises(ValueError, lambda: sign(signer_secnonces[0], signer_secshares[0], signer_ids[0], session_ctx), lambda e: True)

        # Wrong signer index
        assert not partial_sig_verify(signer_psigs[0], signer_ids, signer_pubnonces, signer_pubshares, tweaks, tweak_modes, msg, 1)
        # Wrong message
        assert not partial_sig_verify(signer_psigs[0], signer_ids, signer_pubnonces, signer_pubshares, tweaks, tweak_modes, secrets.token_bytes(32), 0)

        bip340sig = partial_sig_agg(signer_psigs, signer_ids, session_ctx)
        assert schnorr_verify(msg, tweaked_group_pk, bip340sig)
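The secnonce-reuse assertion above depends on `sign` wiping its secnonce buffer in place on first use, so a second call on the same buffer fails loudly. A minimal sketch of that defensive pattern (hypothetical `consume_secnonce` helper, not the actual `sign` API):

```python
def consume_secnonce(secnonce: bytearray) -> bytes:
    # A zeroed buffer marks a nonce that was already spent; reusing a nonce
    # would leak the secret share, so refuse loudly.
    if all(b == 0 for b in secnonce):
        raise ValueError("secnonce is zero; it has probably been reused")
    k = bytes(secnonce)
    secnonce[:] = bytes(len(secnonce))  # zero in place so every alias sees the wipe
    return k

secnonce = bytearray(range(1, 65))
consume_secnonce(secnonce)            # first use succeeds
try:
    consume_secnonce(secnonce)        # second use must fail
except ValueError:
    pass
else:
    raise SystemExit("expected ValueError on secnonce reuse")
```

Zeroing through the `bytearray` (rather than rebinding the name) is what makes the wipe visible to the caller, which is also why copying a secnonce defeats the protection.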

def run_test(test_name, test_func):
    max_len = 30
    test_name = test_name.ljust(max_len, ".")
    print(f"Running {test_name}", end="", flush=True)
    try:
        test_func()
        print("Passed!")
    except Exception as e:
        print(f"Failed :'(\nError: {e}")


if __name__ == '__main__':
    run_test("test_keygen_vectors", test_keygen_vectors)
    run_test("test_nonce_gen_vectors", test_nonce_gen_vectors)
    run_test("test_nonce_agg_vectors", test_nonce_agg_vectors)
    run_test("test_sign_verify_vectors", test_sign_verify_vectors)
    run_test("test_tweak_vectors", test_tweak_vectors)
    run_test("test_det_sign_vectors", test_det_sign_vectors)
    run_test("test_sig_agg_vectors", test_sig_agg_vectors)
    run_test("test_sign_and_verify_random", lambda: test_sign_and_verify_random(6))
@ -0,0 +1,283 @@

{
    "max_participants": 5,
    "min_participants": 3,
    "group_public_key": "037940B3ED1FDC360252A6F48058C7B94276DFB6AA2B7D51706FB48326B19E7AE1",
    "secshare_p1": "81D0D40CDF044588167A987C14552954DB187AC5AD3B1CA40D7B03DCA32AFDFB",
    "identifiers": [1, 2, 3, 4, 5],
    "pubshares": [
        "02BB66437FCAA01292BFB4BB6F19D67818FE693215C36C4663857F1DC8AB8BF4FA",
        "02C3250013C86AA9C3011CD40B2658CBC5B950DD21FFAA4EDE1BB66E18A063CED5",
        "03259D7068335012C08C5D80E181969ED7FFA08F7973E3ED9C8C0BFF3EC03C223E",
        "02A22971750242F6DA35B8DB0DFE74F38A3227118B296ADD2C65E324E2B7EB20AD",
        "03541293535BB662F8294C4BEB7EA25F55FEAE86C6BAE0CEBD741EAAA28639A6E6",
        "020000000000000000000000000000000000000000000000000000000000000007"
    ],
    "msgs": [
        "F95466D086770E689964664219266FE5ED215C92AE20BAB5C9D79ADDDDF3C0CF",
        "",
        "2626262626262626262626262626262626262626262626262626262626262626262626262626"
    ],
    "valid_test_cases": [
        {
            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
            "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "tweaks": [],
            "is_xonly": [],
            "msg_index": 0,
            "signer_index": 0,
            "expected": [
                "038E14A90FB2C66535B42850F009E2F1857000433042EE647066034FDE7F5A3F3C026CD7BDD51BE1490486F1E905B90020CB8294AFE7B6A051069C07D3B2FD9DC12A",
                "89FA301CA35D6BD839089D0EBA7EA16B2C90818103BAA85F92FE6C01F0E0FB61"
            ],
            "comment": "Signing with the minimum number of participants"
        },
        {
            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
            "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
            "id_indices": [1, 0, 2],
            "pubshare_indices": [1, 0, 2],
            "tweaks": [],
            "is_xonly": [],
            "msg_index": 0,
            "signer_index": 1,
            "expected": [
                "038E14A90FB2C66535B42850F009E2F1857000433042EE647066034FDE7F5A3F3C026CD7BDD51BE1490486F1E905B90020CB8294AFE7B6A051069C07D3B2FD9DC12A",
                "89FA301CA35D6BD839089D0EBA7EA16B2C90818103BAA85F92FE6C01F0E0FB61"
            ],
            "comment": "The partial signature shouldn't change if the order of the signer set changes. Note: deterministic signing generates the same secnonces because the parameters are unchanged"
        },
        {
            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
            "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
            "id_indices": [2, 1, 0],
            "pubshare_indices": [2, 1, 0],
            "tweaks": [],
            "is_xonly": [],
            "msg_index": 0,
            "signer_index": 2,
            "expected": [
                "038E14A90FB2C66535B42850F009E2F1857000433042EE647066034FDE7F5A3F3C026CD7BDD51BE1490486F1E905B90020CB8294AFE7B6A051069C07D3B2FD9DC12A",
                "89FA301CA35D6BD839089D0EBA7EA16B2C90818103BAA85F92FE6C01F0E0FB61"
            ],
            "comment": "The partial signature shouldn't change if the order of the signer set changes. Note: deterministic signing generates the same secnonces because the parameters are unchanged"
        },
        {
            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
            "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
            "id_indices": [0, 3, 4],
            "pubshare_indices": [0, 3, 4],
            "tweaks": [],
            "is_xonly": [],
            "msg_index": 0,
            "signer_index": 0,
            "expected": [
                "038E14A90FB2C66535B42850F009E2F1857000433042EE647066034FDE7F5A3F3C026CD7BDD51BE1490486F1E905B90020CB8294AFE7B6A051069C07D3B2FD9DC12A",
                "E5C27E441A9D433CDC4A36F669967E4304435CE5E6E7722D871237C3B4A2EC99"
            ],
            "comment": "The partial signature changes if the membership of the signer set changes"
        },
        {
            "rand": null,
            "aggothernonce": "02D26EF7E09A4BC0A2CF295720C64BAD56A28EF50B6BECBD59AF6F3ADE6C2480C503D11B9993AE4C2D38EA2591287F7B744976F0F0B79104B96D6399507FC533E893",
            "id_indices": [0, 1, 2, 3],
            "pubshare_indices": [0, 1, 2, 3],
            "tweaks": [],
            "is_xonly": [],
            "msg_index": 0,
            "signer_index": 0,
            "expected": [
                "02EEE6300500FB508012424A0F47621F9A844A939020DD64C4254939D848B675A5037BDEA362CBE55D6D36A7635FC21ED8AC2FA05E9B63A8242E07969F6E2D4E36E5",
                "97440C51FCB602911455E6147938F5B81C0C1AF32ADAFD98F5A66A4616289D5D"
            ],
            "comment": "Signing without auxiliary randomness"
        },
        {
            "rand": "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
            "aggothernonce": "03C7E3D6456228347B658911BF612967F36C7791C24F9607ADB34E09F8CC1126D803D2C9C6E3D1A11463F8C2D57B145A814F5D44FD1A42F7A024140AC30D48EE0BEE",
            "id_indices": [0, 1, 2, 3, 4],
            "pubshare_indices": [0, 1, 2, 3, 4],
            "tweaks": [],
            "is_xonly": [],
            "msg_index": 0,
            "signer_index": 0,
            "expected": [
                "020EBAD8A2F6099A0A0A62439F0A2A0E7DF6918DDE55183AFFF112DF2940FF76DE026C4A1C132CF16CFCFC28FEB02651C44719C900DF6F16407711CA8DB31E2A46B8",
                "83271933ECB71C566F3BA61A645B1396251CBF7EDA77B1D2AF5C689003AB631B"
            ],
            "comment": "Signing with the maximum number of participants and the maximum auxiliary randomness value"
        },
        {
            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
            "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "tweaks": [],
            "is_xonly": [],
            "msg_index": 1,
            "signer_index": 0,
            "expected": [
                "0203375B47194F99B8B682B9DCDFB972A066C243BC7AA951A792FF02A707A3C7870367C40EE43583D0FC0F44696BED09D9B89652FC45B738FF03AF8ECA854A5424B1",
                "2D2F6A697B0632291E3240D9E48F82A454EEB9F566987CB5E7612C0B75D41208"
            ],
            "comment": "Empty message"
        },
        {
            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
            "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "tweaks": [],
            "is_xonly": [],
            "msg_index": 2,
            "signer_index": 0,
            "expected": [
                "0256B5FD4623C09A0E073CE04FF488DA0C4319A528CBA3FC26307682AD2CAD069003F8E94981F0D4A0A879CFAEEE0A060DF1E12889FB7C3CEAC498310827F63CBDE2",
                "347C67E959FCA9460F907C0D2CAF5DD427E5CFD7E15330BA38DA6E986ED91B0E"
            ],
            "comment": "Message longer than 32 bytes (38-byte msg)"
        },
        {
            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
            "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "tweaks": ["E8F791FF9225A2AF0102AFFF4A9A723D9612A682A25EBE79802B263CDFCD83BB"],
            "is_xonly": [true],
            "msg_index": 0,
            "signer_index": 0,
            "expected": [
                "0341E28C13AB55A689C4698F31AD68250636B9E41FACCB0D358B4BD9A3DF09B1920311E0CED48F4B3B51E010159D3657FD8EC9DFF1FD30AD28FC126F62AA1C53C451",
                "817169757CF62879BCB2F1DFE895E6781664CA0D18534290C22EC0E40187B7FC"
            ],
            "comment": "Tweaked group public key"
        }
    ],
    "sign_error_test_cases": [
        {
            "rand": "0000000000000000000000000000000000000000000000000000000000000000",
            "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
            "id_indices": [3, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "tweaks": [],
            "is_xonly": [],
            "msg_index": 0,
            "signer_index": null,
            "signer_id": 1,
||||
"error": { |
||||
"type": "value", |
||||
"message": "The signer's id must be present in the participant identifier list." |
||||
}, |
||||
"comment": "The signer's id is not in the participant identifier list." |
||||
}, |
||||
{ |
||||
"rand": "0000000000000000000000000000000000000000000000000000000000000000", |
||||
"aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9", |
||||
"id_indices": [0, 1, 2, 1], |
||||
"pubshare_indices": [0, 1, 2, 1], |
||||
"tweaks": [], |
||||
"is_xonly": [], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"error": { |
||||
"type": "value", |
||||
"message": "The participant identifier list must contain unique elements." |
||||
}, |
||||
"comment": "The participant identifier list contains duplicate elements." |
||||
}, |
||||
{ |
||||
"rand": "0000000000000000000000000000000000000000000000000000000000000000", |
||||
"aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [3, 1, 2], |
||||
"tweaks": [], |
||||
"is_xonly": [], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"error": { |
||||
"type": "value", |
||||
"message": "The signer's pubshare must be included in the list of pubshares." |
||||
}, |
||||
"comment": "The signer's pubshare is not in the list of pubshares. This test case is optional: it can be skipped by implementations that do not check that the signer's pubshare is included in the list of pubshares." |
||||
}, |
||||
{ |
||||
"rand": "0000000000000000000000000000000000000000000000000000000000000000", |
||||
"aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [0, 1], |
||||
"tweaks": [], |
||||
"is_xonly": [], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"error": { |
||||
"type": "value", |
||||
"message": "The pubshares and ids arrays must have the same length." |
||||
}, |
||||
"comment": "The participant identifiers count exceed the participant public shares count" |
||||
}, |
||||
{ |
||||
"rand": "0000000000000000000000000000000000000000000000000000000000000000", |
||||
"aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [0, 1, 5], |
||||
"tweaks": [], |
||||
"is_xonly": [], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"error": { |
||||
"type": "invalid_contribution", |
||||
"signer_id": 3, |
||||
"contrib": "pubshare" |
||||
}, |
||||
"comment": "Signer 3 provided an invalid participant public share" |
||||
}, |
||||
{ |
||||
"rand": "0000000000000000000000000000000000000000000000000000000000000000", |
||||
"aggothernonce": "048465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD61037496A3CC86926D452CAFCFD55D25972CA1675D549310DE296BFF42F72EEEA8C9", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [0, 1, 2], |
||||
"tweaks": [], |
||||
"is_xonly": [], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"error": { |
||||
"type": "invalid_contribution", |
||||
"signer_id": null, |
||||
"contrib": "aggothernonce" |
||||
}, |
||||
"comment": "aggothernonce is invalid due wrong tag, 0x04, in the first half" |
||||
}, |
||||
{ |
||||
"rand": "0000000000000000000000000000000000000000000000000000000000000000", |
||||
"aggothernonce": "0000000000000000000000000000000000000000000000000000000000000000000287BF891D2A6DEAEBADC909352AA9405D1428C15F4B75F04DAE642A95C2548480", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [0, 1, 2], |
||||
"tweaks": [], |
||||
"is_xonly": [], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"error": { |
||||
"type": "invalid_contribution", |
||||
"signer_id": null, |
||||
"contrib": "aggothernonce" |
||||
}, |
||||
"comment": "aggothernonce is invalid because first half corresponds to point at infinity" |
||||
}, |
||||
{ |
||||
"rand": "0000000000000000000000000000000000000000000000000000000000000000", |
||||
"aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [0, 1, 2], |
||||
"tweaks": ["FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141"], |
||||
"is_xonly": [false], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"error": { |
||||
"type": "value", |
||||
"message": "The tweak must be less than n." |
||||
}, |
||||
"comment": "Tweak is invalid because it exceeds group size" |
||||
} |
||||
] |
||||
} |
@ -0,0 +1,78 @@
{
    "valid_test_cases": [
        {
            "max_participants": 3,
            "min_participants": 2,
            "group_public_key": "02F37C34B66CED1FB51C34A90BDAE006901F10625CC06C4F64663B0EAE87D87B4F",
            "participant_identifiers": [1, 2, 3],
            "participant_pubshares": [
                "026BAEE4BF7D4B9C4567DFFF6F3C2C76DF5C082E9320CD8187D6AB5965BC5A119A",
                "03DACC9463E5186F3C81AE1B314F7B09001A22B28BB56AD0ABD3F376818F9604AB",
                "031404710E938032DB0D4F6A4CD20AE37384BE98BA9FE05B42D139361202B391E6"
            ],
            "participant_secshares": [
                "08F89FFE80AC94DCB920C26F3F46140BFC7F95B493F8310F5FC1EA2B01F4254C",
                "04F0FEAC2EDCEDC6CE1253B7FAB8C86B856A797F44D83D82A385554E6E401984",
                "00E95D59DD0D46B0E303E500B62B7CCB0E555D49F5B849F5E748C071DA8C0DBC"
            ]
        },
        {
            "max_participants": 5,
            "min_participants": 3,
            "group_public_key": "037940B3ED1FDC360252A6F48058C7B94276DFB6AA2B7D51706FB48326B19E7AE1",
            "participant_identifiers": [1, 2, 3, 4, 5],
            "participant_pubshares": [
                "02BB66437FCAA01292BFB4BB6F19D67818FE693215C36C4663857F1DC8AB8BF4FA",
                "02C3250013C86AA9C3011CD40B2658CBC5B950DD21FFAA4EDE1BB66E18A063CED5",
                "03259D7068335012C08C5D80E181969ED7FFA08F7973E3ED9C8C0BFF3EC03C223E",
                "02A22971750242F6DA35B8DB0DFE74F38A3227118B296ADD2C65E324E2B7EB20AD",
                "03541293535BB662F8294C4BEB7EA25F55FEAE86C6BAE0CEBD741EAAA28639A6E6"
            ],
            "participant_secshares": [
                "81D0D40CDF044588167A987C14552954DB187AC5AD3B1CA40D7B03DCA32AFDFB",
                "10130412FDB9A10F7DF862CE8763311B7D1B7AACF211ED32272F0DAC49DF6743",
                "1362A14AE07243C93C24E7EEA3FB8C619338C24925F8E5E488DAE1D3DE7B2236",
                "8BBFABB4872E2DB5510027DC6A1E3B271D70519A48F006BB327E805360FE2ED4",
                "792A234FF1ED5ED3BC8A2297D9CB3D6D61134BB9ABAEAF7A64478A9E01324BDC"
            ]
        }
    ],
    "pubshare_correctness_fail_test_cases": [
        {
            "max_participants": 2,
            "min_participants": 2,
            "group_public_key": "0256C92CA18AD18E5E14075E4CDA4C9471E1F69EFF06DA31B9DB8C431697457C96",
            "participant_identifiers": [1, 2],
            "participant_pubshares": [
                "02EF27116868EEC72F1AEF13F0383A83479DB7DFBDE55B568ADC0ABC28B0C82AEB",
                "0381EE46DB9582B6AA84AB1F39CAAD930899B44ACCB75EDFFBB29CDB8E2136F2A7"
            ],
            "participant_secshares": [
                "1903097297A1E0FD75FCBCDB66DC21C65ACEC527100566459F1BBF2FA7388D53",
                "B9B2CD71F1C09B8D6F675D05CDF1396B28FF626CD8C69B9DF4D3B6BDCB57EFF2"
            ],
            "comment": "Invalid pubshare (parity flipped) for participant 2"
        }
    ],
    "group_pubkey_correctness_fail_test_cases": [
        {
            "max_participants": 3,
            "min_participants": 3,
            "group_public_key": "0354F1E67AAFFB49654AF3EE5B0C68D8CF24468D014453F1F13B5221512A0BCE78",
            "participant_identifiers": [1, 2, 3],
            "participant_pubshares": [
                "037A01FF2705D679CDC34E04366CC3BA95BD9E883AC7E33B640D744BE6BCC2D140",
                "039E2C0AE44EA1203606D04B711667C07D1695ADC36FBF07DD37B7ECA85490262C",
                "027C782638AD6A8A95DEDF6CBA940E89E827EC5C4FCF693EAB7D70927C3CA59FDB"
            ],
            "participant_secshares": [
                "A3236A9D6EF252A5C59F17B544ECE39487FFD80F158EB93F8AA4AF707BFA5511",
                "7FA1BE8BCC29555EFAAC4B19D47E26467E056B9DE2F6E0B7B844940FD43D1047",
                "8BACD727EA7C2156F476BFC8EF5B332FE0663464AC3F117C0B69D6460A4AD25D"
            ],
            "comment": "Invalid group public key (parity flipped)"
        }
    ]
}
@ -0,0 +1,56 @@
{
    "pubnonces": [
        "020151C80F435648DF67A22B749CD798CE54E0321D034B92B709B567D60A42E66603BA47FBC1834437B3212E89A84D8425E7BF12E0245D98262268EBDCB385D50641",
        "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60248C264CDD57D3C24D79990B0F865674EB62A0F9018277A95011B41BFC193B833",
        "020151C80F435648DF67A22B749CD798CE54E0321D034B92B709B567D60A42E6660279BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798",
        "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60379BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798",
        "04FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60248C264CDD57D3C24D79990B0F865674EB62A0F9018277A95011B41BFC193B833",
        "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60248C264CDD57D3C24D79990B0F865674EB62A0F9018277A95011B41BFC193B831",
        "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A602FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC30"
    ],
    "valid_test_cases": [
        {
            "pubnonce_indices": [0, 1],
            "participant_identifiers": [1, 2],
            "expected_aggnonce": "035FE1873B4F2967F52FEA4A06AD5A8ECCBE9D0FD73068012C894E2E87CCB5804B024725377345BDE0E9C33AF3C43C0A29A9249F2F2956FA8CFEB55C8573D0262DC8"
        },
        {
            "pubnonce_indices": [2, 3],
            "participant_identifiers": [1, 2],
            "expected_aggnonce": "035FE1873B4F2967F52FEA4A06AD5A8ECCBE9D0FD73068012C894E2E87CCB5804B000000000000000000000000000000000000000000000000000000000000000000",
            "comment": "Sum of the second points encoded in the nonces is the point at infinity, which is serialized as 33 zero bytes"
        }
    ],
    "error_test_cases": [
        {
            "pubnonce_indices": [0, 4],
            "participant_identifiers": [1, 2],
            "error": {
                "type": "invalid_contribution",
                "signer_id": 2,
                "contrib": "pubnonce"
            },
            "comment": "Public nonce from signer 2 is invalid due to a wrong tag, 0x04, in the first half"
        },
        {
            "pubnonce_indices": [5, 1],
            "participant_identifiers": [1, 2],
            "error": {
                "type": "invalid_contribution",
                "signer_id": 1,
                "contrib": "pubnonce"
            },
            "comment": "Public nonce from signer 1 is invalid because the second half does not correspond to an X coordinate"
        },
        {
            "pubnonce_indices": [6, 1],
            "participant_identifiers": [1, 2],
            "error": {
                "type": "invalid_contribution",
                "signer_id": 1,
                "contrib": "pubnonce"
            },
            "comment": "Public nonce from signer 1 is invalid because the second half exceeds the field size"
        }
    ]
}
@ -0,0 +1,48 @@
{
    "test_cases": [
        {
            "rand_": "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F",
            "secshare": "0202020202020202020202020202020202020202020202020202020202020202",
            "pubshare": "024D4B6CD1361032CA9BD2AEB9D900AA4D45D9EAD80AC9423374C451A7254D0766",
            "group_pk": "0707070707070707070707070707070707070707070707070707070707070707",
            "msg": "0101010101010101010101010101010101010101010101010101010101010101",
            "extra_in": "0808080808080808080808080808080808080808080808080808080808080808",
            "expected_secnonce": "6135CE36209DB5E3E7B7A11ADE54D3028D3CFF089DA3C2EC7766921CC4FB3D1BBCD8A7035194A76F43D278C3CD541AEE014663A2251DDE34E8D900EDF1CAA3D9",
            "expected_pubnonce": "02A5671568FD7AEA35369FE4A32530FD0D0A23D125627BEA374D9FA5676F645A6103EC4E899B1DBEFC08C51F48E3AFA8503759E9ECD3DE674D3C93FD0D92E15E631A",
            "comment": ""
        },
        {
            "rand_": "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F",
            "secshare": "0202020202020202020202020202020202020202020202020202020202020202",
            "pubshare": "024D4B6CD1361032CA9BD2AEB9D900AA4D45D9EAD80AC9423374C451A7254D0766",
            "group_pk": "0707070707070707070707070707070707070707070707070707070707070707",
            "msg": "",
            "extra_in": "0808080808080808080808080808080808080808080808080808080808080808",
            "expected_secnonce": "91EB573A7D57A17F1C7465D7301BCF90915B5731CDA408644819933DA3E366E354C8BF875D966C02C095428B4D2780AC8B929090EEE9AEF5E4DF250533FE9A08",
            "expected_pubnonce": "0337513529D07800E8D1B7056456223BFA26B0C12C921ADC87114537D4A65E2E390257723240C10831A1DFD0489DAAA7DF204717EA27147DD9361480D984C763D8A2",
            "comment": "Empty message"
        },
        {
            "rand_": "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F",
            "secshare": "0202020202020202020202020202020202020202020202020202020202020202",
            "pubshare": "024D4B6CD1361032CA9BD2AEB9D900AA4D45D9EAD80AC9423374C451A7254D0766",
            "group_pk": "0707070707070707070707070707070707070707070707070707070707070707",
            "msg": "2626262626262626262626262626262626262626262626262626262626262626262626262626",
            "extra_in": "0808080808080808080808080808080808080808080808080808080808080808",
            "expected_secnonce": "379F4A71682AEF59A022272B7226F02A870F6958873726E33906E765AA36C71D70418EE5C83B76A6BC0E84F04F4F3D92DE83994400404EC8AE35CEA0ECD378AF",
            "expected_pubnonce": "02A5EA11BCF1BC60AA96D1BE0816F373A57FD00991BE6106FD5AB1F6986FAA2BA0030AF6A8479B9C91958F256AEC38339FD25D4F42A073EEB862B42282E00F323A4C",
            "comment": "38-byte message"
        },
        {
            "rand_": "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F",
            "secshare": null,
            "pubshare": null,
            "group_pk": null,
            "msg": null,
            "extra_in": null,
            "expected_secnonce": "E8E239B64F9A4D2B03508C029EEFC8156A3AD899FD58B15759C93C7DA745C3550FABE3F7CDD361407B97C1353056310D1610D478633C5DDE04DEC4917591D2E5",
            "expected_pubnonce": "0399059E50AF7B23F89E1ED7B17A7B24F2D746C663057F6C3B696A416C99C7A1070383C53B9CF236EADF8BDFEB1C3E9A188A1A84190687CD67916DF9BC60CD2D80EC",
            "comment": "Every optional parameter is absent"
        }
    ]
}
@ -0,0 +1,132 @@
{
    "max_participants": 5,
    "min_participants": 3,
    "group_public_key": "037940B3ED1FDC360252A6F48058C7B94276DFB6AA2B7D51706FB48326B19E7AE1",
    "identifiers": [1, 2, 3, 4, 5],
    "pubshares": [
        "02BB66437FCAA01292BFB4BB6F19D67818FE693215C36C4663857F1DC8AB8BF4FA",
        "02C3250013C86AA9C3011CD40B2658CBC5B950DD21FFAA4EDE1BB66E18A063CED5",
        "03259D7068335012C08C5D80E181969ED7FFA08F7973E3ED9C8C0BFF3EC03C223E",
        "02A22971750242F6DA35B8DB0DFE74F38A3227118B296ADD2C65E324E2B7EB20AD",
        "03541293535BB662F8294C4BEB7EA25F55FEAE86C6BAE0CEBD741EAAA28639A6E6"
    ],
    "pubnonces": [
        "021AD89905F193EC1CBED15CDD5F4F0E04FF187648390639C88AC291F2F88D407E02FD49462A942948DF6718EEE86FDD858B124375E6A034A4985D19EE95431E9E03",
        "03A0640E5746CC90EC3EF04F133AF1B79DE67011927A9BA1510B9254E9C8698062037209BB6915B573D2E6394032E508B8285DD498FE8A85971AAB01ACF0C785A56B",
        "02861EFD258C9099BEF14FA9B3B4E6229595D8200FC72D27F789D4CCC4352BB32B038496DA1C20DFE16D24D20F0374812347EE9CFF06928802C04A2D1B2D369F4628",
        "0398DD496FFE3C14D2DDFB5D9FD1189DB829301A83C45F2A1DDF07238529F75D1D0233E8FF18899A66276D27AE5CE28A5170EEAAC4F05DEACC8E7DB1C55F8985495F",
        "03C7B31E363526D04B5D31148EE6B042AF8CC7DFA922A42A69EB78B816D458D0B20257495EC72B1E59FB90A48B036FBD3D9AE4415C49B6171E108185124B99DE56AA"
    ],
    "tweaks": [
        "B511DA492182A91B0FFB9A98020D55F260AE86D7ECBD0399C7383D59A5F2AF7C",
        "A815FE049EE3C5AAB66310477FBC8BCCCAC2F3395F59F921C364ACD78A2F48DC",
        "75448A87274B056468B977BE06EB1E9F657577B7320B0A3376EA51FD420D18A8"
    ],
    "msg": "599C67EA410D005B9DA90817CF03ED3B1C868E4DA4EDF00A5880B0082C237869",
    "psigs": [
        "447D69D4E02693E3F6C04E198F34E89E17D65DC29C92E635E8BFB8D2908DCA6A",
        "E7E02FDE0FA66D116C0DCF932F7976D611A4D0CF225087C2B8282153E461FA8B",
        "E84B98E0B132F4049B061A949EF69E3DFBEB3E2712AEE2DEE0C5B6D517860339",
        "714B7FCF4D3EA2F4BB2B22F786AEBF0C65E1A6E6FBEF04C39B60EAA1CA257CD5",
        "DA815BBE9D06203D5ADD3AD5D3FE5F0D5405939EFD7EA3FED6DACA9E5449AD80",
        "8E367AE4000EEEFCEF7F83DA1AC96181DC51BA0D83E0F834F67A0CFD487DBEF7",
        "9CAB74D0FBCF14D89330D81C85482B8C720DC69899187F3A5432A5856609E92D",
        "351F38F8B3126944362D9B3F0D83791CF3D623E746B84A58012DF4C9383909EC",
        "B9ABA5EE2181EDE7A0D3D29DB147741F66B5A8EF3BB6C9CFD1FAD0D98E5A8A93",
        "A2DF2C5ECB1141E0B55F47711BBDAE491F2F22D967FA1D9569200B7FB0754AD6",
        "441DFF8E4E0E8368D21BD3DD70F151C7C581EC2B1931B8F041CC8C052FEBF046",
        "DDC813A7AA07415634F2F6CC10984EF68216C75EA4F7A8E883DBA163C41CE2BA",
        "2D64FC0371D08A7069997C1009814AF9C964DB64AEDB919AC229DA774AB09888",
        "5F6481FC18E4CB223CB5BAB966165A1033349267702E7D75B5A0E5CACEA0E6A0",
        "312170A9C271F67D09C8BE06A468106505CF6B7CD4DB1A40E02AF13213069EB0",
        "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141"
    ],
    "valid_test_cases": [
        {
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "pubnonce_indices": [0, 1, 2],
            "aggnonce": "022265D41FDEFDC64072C7E168B345D547208C6E02E4D76E2F0D91C0773CF9FC250230B496E7FC1C45EFFB0687CFFC556FDA507F69CAE9894022828903DC3198DAFE",
            "tweak_indices": [],
            "is_xonly": [],
            "psig_indices": [0, 1, 2],
            "expected": "8471BE6E49D0E43097DD32DA374039149F5D00165A8AD369AE86E362D13730DA14A93293A0FFF4F9FDD438415DA4FDB4B008B2EB730110600208D3E1EC0945AC",
            "comment": "Signing with minimum number of participants"
        },
        {
            "id_indices": [2, 0, 1],
            "pubshare_indices": [2, 0, 1],
            "pubnonce_indices": [2, 0, 1],
            "aggnonce": "022265D41FDEFDC64072C7E168B345D547208C6E02E4D76E2F0D91C0773CF9FC250230B496E7FC1C45EFFB0687CFFC556FDA507F69CAE9894022828903DC3198DAFE",
            "tweak_indices": [],
            "is_xonly": [],
            "psig_indices": [2, 0, 1],
            "expected": "8471BE6E49D0E43097DD32DA374039149F5D00165A8AD369AE86E362D13730DA14A93293A0FFF4F9FDD438415DA4FDB4B008B2EB730110600208D3E1EC0945AC",
            "comment": "The order of the signer set shouldn't affect the aggregate signature. The expected value must match the previous test vector."
        },
        {
            "id_indices": [1, 3, 4],
            "pubshare_indices": [1, 3, 4],
            "pubnonce_indices": [1, 3, 4],
            "aggnonce": "0248A0DA464AB5C69B7C0159A1D3773478D606AE3BFE38AC26556B3B4E5FA47668023848EEDE8406CDE99E2CA52D2135D9AC31BC5636DE8452C597A77611CBA9AFCC",
            "tweak_indices": [0],
            "is_xonly": [false],
            "psig_indices": [3, 4, 5],
            "expected": "BA3388FF06D512B23065196A8F8673EA2A6DBAE6714A3E634C258E176A009172462472BB88A65538F17BF5DC433EC01C1770CA5F233A2718662EA1019FDC3BB8",
            "comment": "Signing with tweaked group public key"
        },
        {
            "id_indices": [0, 1, 2, 3],
            "pubshare_indices": [0, 1, 2, 3],
            "pubnonce_indices": [0, 1, 2, 3],
            "aggnonce": "03EE8C3A0DCB63B05B93370561E80BDA65BDB7412BD947F8CED8CE0B5574D87FC002D5E954284D0198FC64FFD0ABB50DF8B0C3A6B2B369A5DB3E318A058482B29BA1",
            "tweak_indices": [0, 1, 2],
            "is_xonly": [true, false, true],
            "psig_indices": [6, 7, 8, 9],
            "expected": "143C2B3A3F4847D0D9FA3D3D7EEF6135148345C0BB620707334ADF5F1395A17BB02DA3FEDB9108179331D06E0BBD34B19E3FFF0893616A2310D47F73077C5CD5",
            "comment": ""
        },
        {
            "id_indices": [0, 1, 2, 3, 4],
            "pubshare_indices": [0, 1, 2, 3, 4],
            "pubnonce_indices": [0, 1, 2, 3, 4],
            "aggnonce": "03C03DE1E69FABAFE2BC9FF8940CD50BCCA1B5A35CB56A719264F8C93DA006837C03F59B87EEF390D4189504FFDE2EE709372E036DE71E0633A6B1D30D3A10EC6FFE",
            "tweak_indices": [0, 1, 2],
            "is_xonly": [true, false, true],
            "psig_indices": [10, 11, 12, 13, 14],
            "expected": "64F75B69667302B459330DE1221AEF5C8F04C44635E6078ED068344EF04FBA00273493772CABC9C2C87515F916118CCAB2D3902A6F5EAC6F155725B58DFCBBD3",
            "comment": "Signing with max number of participants and tweaked group public key"
        }
    ],
    "error_test_cases": [
        {
            "id_indices": [0, 1, 2, 3, 4],
            "pubshare_indices": [0, 1, 2, 3, 4],
            "pubnonce_indices": [0, 1, 2, 3, 4],
            "aggnonce": "03C03DE1E69FABAFE2BC9FF8940CD50BCCA1B5A35CB56A719264F8C93DA006837C03F59B87EEF390D4189504FFDE2EE709372E036DE71E0633A6B1D30D3A10EC6FFE",
            "tweak_indices": [0, 1, 2],
            "is_xonly": [true, false, true],
            "psig_indices": [10, 11, 15, 13, 14],
            "error": {
                "type": "invalid_contribution",
                "signer_id": 3,
                "contrib": "psig"
            },
            "comment": "Partial signature is invalid because it exceeds the group size"
        },
        {
            "id_indices": [0, 1, 2, 3, 4],
            "pubshare_indices": [0, 1, 2, 3, 4],
            "pubnonce_indices": [0, 1, 2, 3, 4],
            "aggnonce": "03C03DE1E69FABAFE2BC9FF8940CD50BCCA1B5A35CB56A719264F8C93DA006837C03F59B87EEF390D4189504FFDE2EE709372E036DE71E0633A6B1D30D3A10EC6FFE",
            "tweak_indices": [0, 1, 2],
            "is_xonly": [true, false, true],
            "psig_indices": [10, 11, 12, 13],
            "error": {
                "type": "value",
                "message": "The psigs and ids arrays must have the same length."
            },
            "comment": "Partial signature count doesn't match the signer set count"
        }
    ]
}
@ -0,0 +1,339 @@
{
    "max_participants": 5,
    "min_participants": 3,
    "group_public_key": "037940B3ED1FDC360252A6F48058C7B94276DFB6AA2B7D51706FB48326B19E7AE1",
    "secshare_p1": "81D0D40CDF044588167A987C14552954DB187AC5AD3B1CA40D7B03DCA32AFDFB",
    "identifiers": [1, 2, 3, 4, 5],
    "pubshares": [
        "02BB66437FCAA01292BFB4BB6F19D67818FE693215C36C4663857F1DC8AB8BF4FA",
        "02C3250013C86AA9C3011CD40B2658CBC5B950DD21FFAA4EDE1BB66E18A063CED5",
        "03259D7068335012C08C5D80E181969ED7FFA08F7973E3ED9C8C0BFF3EC03C223E",
        "02A22971750242F6DA35B8DB0DFE74F38A3227118B296ADD2C65E324E2B7EB20AD",
        "03541293535BB662F8294C4BEB7EA25F55FEAE86C6BAE0CEBD741EAAA28639A6E6",
        "020000000000000000000000000000000000000000000000000000000000000007"
    ],
    "secnonces_p1": [
        "96DF27F46CB6E0399C7A02811F6A4D695BBD7174115477679E956658FF2E83D618E4F670DF3DEB215934E4F68D4EEC71055B87288947D75F6E1EA9037FF62173",
        "00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
    ],
    "pubnonces": [
        "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
        "02D26EF7E09A4BC0A2CF295720C64BAD56A28EF50B6BECBD59AF6F3ADE6C2480C503D11B9993AE4C2D38EA2591287F7B744976F0F0B79104B96D6399507FC533E893",
        "03C7E3D6456228347B658911BF612967F36C7791C24F9607ADB34E09F8CC1126D803D2C9C6E3D1A11463F8C2D57B145A814F5D44FD1A42F7A024140AC30D48EE0BEE",
        "036409E6BA4A00E148E9BE2D3B4217A74B3A65F0D75489176EF8A7D2BD699B949002B1E9FA2A8AE80CD7CE1593B51402B980B56896DB5B5C2B07EDA2C0CFEB08AD93",
        "02464144C7AFAEF651F63E330B1FFF6EEC43991F9AE75AE6069796C097B04DAE720288B464788E5DFC9C2CCD6A3CCBBED643666749250012DA220D1C9FC559214270",
        "0200000000000000000000000000000000000000000000000000000000000000090287BF891D2A6DEAEBADC909352AA9405D1428C15F4B75F04DAE642A95C2548480",
        "03135EFD879EC3BC76E953758E0611E07057CA4F1EA935E8BA6151ED4696B7827A0397A1B70CF6403286EE0DD153DBFDCFBEE3D7A15745569C097A328C7CCB36E7E5"
    ],
    "aggnonces": [
        "02047C99228CEA528AE200A82CBE4CD188BC67D58F537D1904A16B07FCDE07C3A6038708199DFA5BC5C41A0DD0FBD7D0620ADB4AC9991F7DB55A155CE9396AA80D1A",
        "0365B60FA963FCB2ED1454264942397DBFC244A4B6CBE8FDEAF6FB23F14F76610002433AB9A295A67CD2B45B001B6F8154DC6619C994776EBF65A3C88A4BC94DBC98",
        "03AB37C47419536990037B903428008878E4F395823A135C2B39E67FA850CFF41F028967ECFE399759125F59F7142B6580D91F70DE1C9E9C6B0F56754B64370A4438",
        "0353365AF75F7C246089940D57D3265947A1D27576E411AE9C98702516C72DB51B02F5483E63F474BDD8EAC03F99276ED5A2ED31786F5B0F1A8706BE7367BC1D4555",
        "000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
        "048465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD61037496A3CC86926D452CAFCFD55D25972CA1675D549310DE296BFF42F72EEEA8C9",
        "028465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD61020000000000000000000000000000000000000000000000000000000000000009",
        "028465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD6102FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC30"
    ],
    "msgs": [
        "F95466D086770E689964664219266FE5ED215C92AE20BAB5C9D79ADDDDF3C0CF",
        "",
        "2626262626262626262626262626262626262626262626262626262626262626262626262626"
    ],
    "valid_test_cases": [
        {
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "pubnonce_indices": [0, 1, 2],
            "aggnonce_index": 0,
            "msg_index": 0,
            "signer_index": 0,
            "expected": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
            "comment": "Signing with minimum number of participants"
        },
        {
            "id_indices": [1, 0, 2],
            "pubshare_indices": [1, 0, 2],
            "pubnonce_indices": [1, 0, 2],
            "aggnonce_index": 0,
            "msg_index": 0,
            "signer_index": 1,
            "expected": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
            "comment": "Partial signature shouldn't change if the order of the signer set changes (without changing secnonces)"
        },
        {
            "id_indices": [2, 1, 0],
            "pubshare_indices": [2, 1, 0],
            "pubnonce_indices": [2, 1, 0],
            "aggnonce_index": 0,
            "msg_index": 0,
            "signer_index": 2,
            "expected": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
            "comment": "Partial signature shouldn't change if the order of the signer set changes (without changing secnonces)"
        },
        {
            "id_indices": [0, 3, 4],
            "pubshare_indices": [0, 3, 4],
            "pubnonce_indices": [0, 3, 4],
            "aggnonce_index": 1,
            "msg_index": 0,
            "signer_index": 0,
            "expected": "599723B8E16DA7D67A43EB09E6A990BF5BA7CD441657FE14D654E8C0523D0814",
            "comment": "Partial signature changes if the members of the signer set change"
        },
        {
            "id_indices": [0, 1, 2, 3],
            "pubshare_indices": [0, 1, 2, 3],
            "pubnonce_indices": [0, 1, 2, 3],
            "aggnonce_index": 2,
            "msg_index": 0,
            "signer_index": 0,
            "expected": "6762C37ABF433C029E6698B435D5F7BE634D7B64A8151ACB07402465DB7D4057"
        },
        {
            "id_indices": [0, 1, 2, 3, 4],
            "pubshare_indices": [0, 1, 2, 3, 4],
            "pubnonce_indices": [0, 1, 2, 3, 4],
            "aggnonce_index": 3,
            "msg_index": 0,
            "signer_index": 0,
            "expected": "32D17330BF21D4D058E52A07E86F21D653ED697C1CCFE6F4D17084EF5C99FF18",
            "comment": "Signing with maximum number of participants"
        },
        {
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "pubnonce_indices": [0, 1, 6],
            "aggnonce_index": 4,
            "msg_index": 0,
            "signer_index": 0,
            "expected": "16B1E11E2BB93911E0422715FD03C0C8F1B7845B6A69F8BB8AB90155D91C25B5",
            "comment": "Both halves of aggregate nonce correspond to point at infinity"
        },
        {
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "pubnonce_indices": [0, 1, 2],
            "aggnonce_index": 0,
            "msg_index": 1,
            "signer_index": 0,
            "expected": "E1915436DC7D4BC162842A3C1BAA16E82A8D64056A02C5D2BD75784B604C23CD",
            "comment": "Empty message"
        },
        {
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "pubnonce_indices": [0, 1, 2],
            "aggnonce_index": 0,
            "msg_index": 2,
            "signer_index": 0,
            "expected": "78E04F68D813CBAF68CB5E19A835C69B833138ED18BDE63CB399F52F559EAA17",
            "comment": "Message longer than 32 bytes (38-byte msg)"
        }
    ],
    "sign_error_test_cases": [
        {
            "id_indices": [3, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "aggnonce_index": 0,
            "msg_index": 0,
            "signer_index": null,
            "signer_id": 1,
            "secnonce_index": 0,
            "error": {
                "type": "value",
                "message": "The signer's id must be present in the participant identifier list."
            },
            "comment": "The signer's id is not in the participant identifier list."
        },
        {
            "id_indices": [0, 1, 2, 1],
            "pubshare_indices": [0, 1, 2, 1],
            "aggnonce_index": 0,
            "msg_index": 0,
            "signer_index": 0,
            "secnonce_index": 0,
            "error": {
                "type": "value",
                "message": "The participant identifier list must contain unique elements."
            },
            "comment": "The participant identifier list contains duplicate elements."
        },
        {
            "id_indices": [0, 1, 2],
            "pubshare_indices": [3, 1, 2],
            "aggnonce_index": 0,
            "msg_index": 0,
            "signer_index": 0,
            "secnonce_index": 0,
            "error": {
                "type": "value",
                "message": "The signer's pubshare must be included in the list of pubshares."
            },
            "comment": "The signer's pubshare is not in the list of pubshares. This test case is optional: it can be skipped by implementations that do not check that the signer's pubshare is included in the list of pubshares."
        },
        {
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1],
            "aggnonce_index": 0,
            "msg_index": 0,
            "signer_index": 0,
            "secnonce_index": 0,
            "error": {
                "type": "value",
                "message": "The pubshares and ids arrays must have the same length."
            },
            "comment": "The participant identifier count exceeds the participant public share count"
        },
        {
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 5],
            "aggnonce_index": 0,
            "msg_index": 0,
            "signer_index": 0,
            "secnonce_index": 0,
            "error": {
                "type": "invalid_contribution",
                "signer_id": 3,
                "contrib": "pubshare"
            },
            "comment": "Signer 3 provided an invalid participant public share"
        },
        {
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "aggnonce_index": 5,
            "msg_index": 0,
            "signer_index": 0,
            "secnonce_index": 0,
            "error": {
                "type": "invalid_contribution",
                "signer_id": null,
                "contrib": "aggnonce"
            },
            "comment": "Aggregate nonce is invalid due to a wrong tag, 0x04, in the first half"
        },
        {
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "aggnonce_index": 6,
            "msg_index": 0,
            "signer_index": 0,
            "secnonce_index": 0,
            "error": {
                "type": "invalid_contribution",
                "signer_id": null,
                "contrib": "aggnonce"
            },
            "comment": "Aggregate nonce is invalid because the second half does not correspond to an X coordinate"
        },
        {
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "aggnonce_index": 7,
            "msg_index": 0,
            "signer_index": 0,
            "secnonce_index": 0,
            "error": {
                "type": "invalid_contribution",
                "signer_id": null,
                "contrib": "aggnonce"
            },
            "comment": "Aggregate nonce is invalid because the second half exceeds the field size"
||||
}, |
||||
{ |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [0, 1, 2], |
||||
"aggnonce_index": 0, |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"secnonce_index": 1, |
||||
"error": { |
||||
"type": "value", |
||||
"message": "first secnonce value is out of range." |
||||
}, |
||||
"comment": "Secnonce is invalid which may indicate nonce reuse" |
||||
} |
||||
], |
||||
"verify_fail_test_cases": [ |
||||
{ |
||||
"psig": "21255BB1924800E4BF273455BB20C07239F15FFA4526F218CC8386E10A981655", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [0, 1, 2], |
||||
"pubnonce_indices": [0, 1, 2], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"comment": "Wrong signature (which is equal to the negation of valid signature)" |
||||
}, |
||||
{ |
||||
"psig": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [0, 1, 2], |
||||
"pubnonce_indices": [0, 1, 2], |
||||
"msg_index": 0, |
||||
"signer_index": 1, |
||||
"comment": "Wrong signer" |
||||
}, |
||||
{ |
||||
"psig": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [3, 1, 2], |
||||
"pubnonce_indices": [0, 1, 2], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"comment": "The signer's pubshare is not in the list of pubshares" |
||||
}, |
||||
{ |
||||
"psig": "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [0, 1, 2], |
||||
"pubnonce_indices": [0, 1, 2], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"comment": "Signature exceeds group size" |
||||
} |
||||
], |
||||
"verify_error_test_cases": [ |
||||
{ |
||||
"psig": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [0, 1, 2], |
||||
"pubnonce_indices": [5, 1, 2], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"error": { |
||||
"type": "invalid_contribution", |
||||
"signer_id": 1, |
||||
"contrib": "pubnonce" |
||||
}, |
||||
"comment": "Invalid pubnonce" |
||||
}, |
||||
{ |
||||
"psig": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [5, 1, 2], |
||||
"pubnonce_indices": [0, 1, 2], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"error": { |
||||
"type": "invalid_contribution", |
||||
"signer_id": 1, |
||||
"contrib": "pubshare" |
||||
}, |
||||
"comment": "Invalid pubshare" |
||||
}, |
||||
{ |
||||
"psig": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC", |
||||
"id_indices": [0, 1, 2], |
||||
"pubshare_indices": [0, 1, 2], |
||||
"pubnonce_indices": [0, 1, 2, 3], |
||||
"msg_index": 0, |
||||
"signer_index": 0, |
||||
"error": { |
||||
"type": "value", |
||||
"message": "The ids, pubnonces and pubshares arrays must have the same length." |
||||
}, |
||||
"comment": "public nonces count is greater than ids and pubshares" |
||||
} |
||||
] |
||||
} |
||||
@ -0,0 +1,164 @@
|
{
    "max_participants": 5,
    "min_participants": 3,
    "group_public_key": "037940B3ED1FDC360252A6F48058C7B94276DFB6AA2B7D51706FB48326B19E7AE1",
    "secshare_p1": "81D0D40CDF044588167A987C14552954DB187AC5AD3B1CA40D7B03DCA32AFDFB",
    "identifiers": [1, 2, 3, 4, 5],
    "pubshares": [
        "02BB66437FCAA01292BFB4BB6F19D67818FE693215C36C4663857F1DC8AB8BF4FA",
        "02C3250013C86AA9C3011CD40B2658CBC5B950DD21FFAA4EDE1BB66E18A063CED5",
        "03259D7068335012C08C5D80E181969ED7FFA08F7973E3ED9C8C0BFF3EC03C223E",
        "02A22971750242F6DA35B8DB0DFE74F38A3227118B296ADD2C65E324E2B7EB20AD",
        "03541293535BB662F8294C4BEB7EA25F55FEAE86C6BAE0CEBD741EAAA28639A6E6"
    ],
    "secnonce_p1": "96DF27F46CB6E0399C7A02811F6A4D695BBD7174115477679E956658FF2E83D618E4F670DF3DEB215934E4F68D4EEC71055B87288947D75F6E1EA9037FF62173",
    "pubnonces": [
        "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
        "02D26EF7E09A4BC0A2CF295720C64BAD56A28EF50B6BECBD59AF6F3ADE6C2480C503D11B9993AE4C2D38EA2591287F7B744976F0F0B79104B96D6399507FC533E893",
        "03C7E3D6456228347B658911BF612967F36C7791C24F9607ADB34E09F8CC1126D803D2C9C6E3D1A11463F8C2D57B145A814F5D44FD1A42F7A024140AC30D48EE0BEE",
        "036409E6BA4A00E148E9BE2D3B4217A74B3A65F0D75489176EF8A7D2BD699B949002B1E9FA2A8AE80CD7CE1593B51402B980B56896DB5B5C2B07EDA2C0CFEB08AD93",
        "02464144C7AFAEF651F63E330B1FFF6EEC43991F9AE75AE6069796C097B04DAE720288B464788E5DFC9C2CCD6A3CCBBED643666749250012DA220D1C9FC559214270"
    ],
    "aggnonces": [
        "02047C99228CEA528AE200A82CBE4CD188BC67D58F537D1904A16B07FCDE07C3A6038708199DFA5BC5C41A0DD0FBD7D0620ADB4AC9991F7DB55A155CE9396AA80D1A",
        "03AB37C47419536990037B903428008878E4F395823A135C2B39E67FA850CFF41F028967ECFE399759125F59F7142B6580D91F70DE1C9E9C6B0F56754B64370A4438",
        "0353365AF75F7C246089940D57D3265947A1D27576E411AE9C98702516C72DB51B02F5483E63F474BDD8EAC03F99276ED5A2ED31786F5B0F1A8706BE7367BC1D4555"
    ],
    "tweaks": [
        "E8F791FF9225A2AF0102AFFF4A9A723D9612A682A25EBE79802B263CDFCD83BB",
        "AE2EA797CC0FE72AC5B97B97F3C6957D7E4199A167A58EB08BCAFFDA70AC0455",
        "F52ECBC565B3D8BEA2DFD5B75A4F457E54369809322E4120831626F290FA87E0",
        "1969AD73CC177FA0B4FCED6DF1F7BF9907E665FDE9BA196A74FED0A3CF5AEF9D",
        "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141"
    ],
    "msg": "F95466D086770E689964664219266FE5ED215C92AE20BAB5C9D79ADDDDF3C0CF",
    "valid_test_cases": [
        {
            "id_indices": [1, 2, 0],
            "pubshare_indices": [1, 2, 0],
            "pubnonce_indices": [1, 2, 0],
            "tweak_indices": [],
            "is_xonly": [],
            "aggnonce_index": 0,
            "signer_index": 2,
            "expected": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
            "comment": "No tweak. The expected value (partial sig) must match signing with the untweaked group public key."
        },
        {
            "id_indices": [1, 2, 0],
            "pubshare_indices": [1, 2, 0],
            "pubnonce_indices": [1, 2, 0],
            "tweak_indices": [0],
            "is_xonly": [true],
            "aggnonce_index": 0,
            "signer_index": 2,
            "expected": "00A84851A7D3F53B94FDFDE0BE6C6DCE570B7FF27E8B77FDF75AFF52066F42EE",
            "comment": "A single x-only tweak"
        },
        {
            "id_indices": [1, 2, 0],
            "pubshare_indices": [1, 2, 0],
            "pubnonce_indices": [1, 2, 0],
            "tweak_indices": [0],
            "is_xonly": [false],
            "aggnonce_index": 0,
            "signer_index": 2,
            "expected": "FC2D7852AAEF8F3C229FEC7E6B496999C52857387E4274CD2F7625CD4B262D73",
            "comment": "A single plain tweak"
        },
        {
            "id_indices": [1, 2, 0],
            "pubshare_indices": [1, 2, 0],
            "pubnonce_indices": [1, 2, 0],
            "tweak_indices": [0, 1],
            "is_xonly": [false, true],
            "aggnonce_index": 0,
            "signer_index": 2,
            "expected": "1634928A5951F23E77DB9D6171E89A04E55B2BC07A492CFE68B611303C96957A",
            "comment": "A plain tweak followed by an x-only tweak"
        },
        {
            "id_indices": [1, 2, 0],
            "pubshare_indices": [1, 2, 0],
            "pubnonce_indices": [1, 2, 0],
            "tweak_indices": [0, 1, 2, 3],
            "is_xonly": [true, false, true, false],
            "aggnonce_index": 0,
            "signer_index": 2,
            "expected": "4252C4EA9641F1B8C502F3B63C3D0AFEF3274CFE7C70D94AE2F2DC54FA16D216",
            "comment": "Four tweaks: x-only, plain, x-only, plain. If an implementation prohibits applying plain tweaks after x-only tweaks, it can skip this test vector or return an error."
        },
        {
            "id_indices": [1, 2, 0],
            "pubshare_indices": [1, 2, 0],
            "pubnonce_indices": [1, 2, 0],
            "tweak_indices": [0, 1, 2, 3],
            "is_xonly": [false, false, true, true],
            "aggnonce_index": 0,
            "signer_index": 2,
            "expected": "CF079FD835F00CF6A737FDC19D602AA445C95825B6A5D1C0FFB32A848427F49E",
            "comment": "Four tweaks: plain, plain, x-only, x-only."
        },
        {
            "id_indices": [0, 1, 2],
            "pubshare_indices": [0, 1, 2],
            "pubnonce_indices": [0, 1, 2],
            "tweak_indices": [0, 1, 2, 3],
            "is_xonly": [false, false, true, true],
            "aggnonce_index": 0,
            "signer_index": 0,
            "expected": "CF079FD835F00CF6A737FDC19D602AA445C95825B6A5D1C0FFB32A848427F49E",
            "comment": "Order of the signers shouldn't affect tweaking. The expected value (partial sig) must match the previous test vector."
        },
        {
            "id_indices": [0, 1, 2, 3],
            "pubshare_indices": [0, 1, 2, 3],
            "pubnonce_indices": [0, 1, 2, 3],
            "tweak_indices": [0, 1, 2, 3],
            "is_xonly": [false, false, true, true],
            "aggnonce_index": 1,
            "signer_index": 0,
            "expected": "22B8AE565FB2A52E07F1D6D0B5F85DD16932ADF77C0D61C473554133C22EE617",
            "comment": "The number of signers won't affect tweaking, but the expected value (partial sig) will change because of the interpolation value."
        },
        {
            "id_indices": [0, 1, 2, 3, 4],
            "pubshare_indices": [0, 1, 2, 3, 4],
            "pubnonce_indices": [0, 1, 2, 3, 4],
            "tweak_indices": [0, 1, 2, 3],
            "is_xonly": [false, false, true, true],
            "aggnonce_index": 2,
            "signer_index": 0,
            "expected": "7BCA92625F1C83D1EE6A855A198D25410BBE3867E2B61400A02D12BA2D6E2384",
            "comment": "Tweaking with the maximum possible number of signers"
        }
    ],
    "error_test_cases": [
        {
            "id_indices": [1, 2, 0],
            "pubshare_indices": [1, 2, 0],
            "tweak_indices": [4],
            "is_xonly": [false],
            "aggnonce_index": 0,
            "signer_index": 2,
            "error": {
                "type": "value",
                "message": "The tweak must be less than n."
            },
            "comment": "Tweak is invalid because it exceeds group size"
        },
        {
            "id_indices": [1, 2, 0],
            "pubshare_indices": [1, 2, 0],
            "tweak_indices": [0, 1, 2, 3],
            "is_xonly": [false, false],
            "aggnonce_index": 0,
            "signer_index": 2,
            "error": {
                "type": "value",
                "message": "The tweaks and is_xonly arrays must have the same length."
            },
            "comment": "The tweaks count doesn't match the tweak modes count"
        }
    ]
}
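The vector files above reference the master arrays by index: each test case carries `*_indices` fields that point into the top-level `identifiers`, `pubshares`, `pubnonces`, etc. A minimal loader sketch for a test harness might expand a case like this (the helper name `resolve` and the placeholder pubshare strings are hypothetical; only the JSON field names come from the files above, and `signer_index` is assumed to index the resolved per-case lists):

```python
import json

# Trimmed stand-in for one of the vector files above; "ps0"... are
# placeholders for the real compressed-pubkey hex strings.
VECTORS = json.loads("""
{
  "identifiers": [1, 2, 3, 4, 5],
  "pubshares": ["ps0", "ps1", "ps2"],
  "valid_test_cases": [
    {"id_indices": [1, 2, 0], "pubshare_indices": [1, 2, 0], "signer_index": 2}
  ]
}
""")

def resolve(case: dict, vectors: dict):
    """Expand a test case's *_indices fields into concrete values."""
    ids = [vectors["identifiers"][i] for i in case["id_indices"]]
    pubshares = [vectors["pubshares"][i] for i in case["pubshare_indices"]]
    # "signer_index" is assumed to index the resolved per-case lists,
    # not the master arrays at the top of the file.
    signer_id = ids[case["signer_index"]]
    return ids, pubshares, signer_id

ids, pubshares, signer_id = resolve(VECTORS["valid_test_cases"][0], VECTORS)
print(ids, signer_id)  # [2, 3, 1] 1
```

With this indirection, out-of-range indices (e.g. `"pubshare_indices": [0, 1, 5]` against a 5-element array) are how the error cases feed deliberately invalid contributions to the implementation under test.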