diff --git a/.gitignore b/.gitignore
index 8ebd107..83a2dfb 100644
--- a/.gitignore
+++ b/.gitignore
@@ -17,7 +17,7 @@ bobkey
commitments_debug.txt
dummyext
deps/
-jmvenv/
+jmvenv*/
logs/
miniircd/
miniircd.tar.gz
diff --git a/conftest.py b/conftest.py
index 5df761f..6a44a0f 100644
--- a/conftest.py
+++ b/conftest.py
@@ -75,7 +75,7 @@ def pytest_addoption(parser: Any) -> None:
default='bitcoinrpc',
help="the RPC username for your test bitcoin instance (default=bitcoinrpc)")
parser.addoption("--nirc",
- type=int,
+ type="int",
action="store",
default=1,
help="the number of local miniircd instances")
diff --git a/docs/frost-wallet-dev.md b/docs/frost-wallet-dev.md
new file mode 100644
index 0000000..6a1ba7d
--- /dev/null
+++ b/docs/frost-wallet-dev.md
@@ -0,0 +1,119 @@
+# FROST P2TR wallet development details
+
+**NOTE**: the minimal Python version is 3.12
+
+## FrostWallet storages
+`FrostWallet` has two additional storages in addition to the wallet `Storage`:
+- `DKGStorage` with DKG data
+- `DKGRecoveryStorage` with DKG recovery data (unencrypted)
+
+They are loaded only for DKG/FROST support and are not loaded during normal
+wallet usage.
+
+Normal wallet usage interacts with the FROST/DKG functionality via the IPC
+code in `frost_ipc.py` (currently an `AF_UNIX` socket, for simplicity).
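The shape of such an `AF_UNIX` request/response exchange can be sketched as follows; the socket path and the JSON message format here are hypothetical illustrations, not the actual `frost_ipc.py` protocol:

```python
import json
import os
import socket
import tempfile
import threading

# Hypothetical socket path; frost_ipc.py defines its own location.
path = os.path.join(tempfile.mkdtemp(), "frost.sock")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
srv.listen(1)

def serve_once():
    # Answer a single request, echoing the command back with an "ok" flag.
    conn, _ = srv.accept()
    req = json.loads(conn.recv(4096))
    conn.sendall(json.dumps({"cmd": req["cmd"], "ok": True}).encode())
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(path)
cli.sendall(json.dumps({"cmd": "hostpubkey"}).encode())
reply = json.loads(cli.recv(4096))
cli.close()
t.join()
srv.close()
```

The real code multiplexes requests from the wallet process to the long-running `servefrost` process; the sketch only shows the transport.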
+
+## Structure of DKG data in the DKGStorage
+
+```
+"dkg": {
+ "sessions": {
+ "md_type_idx": session_id,
+ ...
+ },
+ "pubkey": {
+ "session_id": threshold_pubkey,
+ ...
+ },
+ "pubshares": {
+ "session_id": [pubshare1, pubshare2, ...],
+ ...
+ },
+ "secshare": {
+ "session_id": secshare,
+ ...
+ },
+ "hostpubkeys": {
+ "session_id": [hostpubkey1, hostpubkey2, ...],
+ ...
+ },
+ "t": {
+ "session_id": t,
+ ...
+ }
+}
+```
+Where `md_type_idx` is the serialization into bytes of the `mixdepth`,
+`address_type` and `index` of a pubkey, as in the HD wallet derivations.
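Such a key can be sketched as a fixed-width packing of the three derivation components; the 4-byte big-endian field width here is an assumption for illustration, the actual layout is defined by the wallet code:

```python
import struct

def md_type_idx(mixdepth: int, address_type: int, index: int) -> bytes:
    # Pack the three HD-derivation components into one bytes key,
    # used to look up the DKG session_id for a given wallet pubkey.
    # Field widths are illustrative, not the project's actual format.
    return struct.pack(">III", mixdepth, address_type, index)

key = md_type_idx(0, 1, 5)
```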
+
+## Overall information
+The code uses the Twisted `asyncioreactor` in place of the standard Twisted
+reactor. Initialization is done as early as possible in
+`jmclient/__init__.py`.
+The wallet classes `TaprootWallet` and `FrostWallet` are in
+`jmclient/wallet.py`, as is the utility class `DKGManager`.
+The engine classes `BTC_P2TR(BTCEngine)` and `BTC_P2TR_FROST(BTC_P2TR)` are
+in `jmclient/cryptoengine.py`.
+
+## `scripts/wallet-tool.py` commands
+
+- `hostpubkey`: display the host public key
+- `servefrost`: run only as DKG/FROST support (a separate process which needs
+to run permanently)
+- `dkgrecover`: recover DKG sessions from the DKG recovery file
+- `dkgls`: display FrostWallet DKG data
+- `dkgrm`: remove FrostWallet DKG data by `session_id` list
+- `recdkgls`: display DKG recovery file data
+- `recdkgrm`: remove DKG recovery file data by `session_id` list
+- `testfrost`: run only as a test of FROST signing
+
+## Description of `jmclient/frost_clients.py`
+
+- `class DKGClient`: a client which supports only DKG sessions over JM
+channels. It uses the `chilldkg` reference code from
+https://github.com/BlockstreamResearch/bip-frost-dkg/, placed in the
+`jmfrost/chilldkg_ref` package.
+
+It uses the channel-level commands `dkginit`, `dkgpmsg1`, `dkgcmsg1`,
+`dkgpmsg2`, `dkgcmsg2` and `dkgfinalized`, added to `jmdaemon/protocol.py`.
+
+NOTE: `dkgfinalized` is used to ensure that all DKG parties saw `dkgcmsg2`
+and saved the DKG data to the wallet/recovery data.
+
+The commands in `jmbase/commands.py`: `JMDKGInit`, `JMDKGPMsg1`,
+`JMDKGCMsg1`, `JMDKGPMsg2`, `JMDKGCMsg2`, `JMDKGFinalized`, `JMDKGInitSeen`,
+`JMDKGPMsg1Seen`, `JMDKGCMsg1Seen`, `JMDKGPMsg2Seen`, `JMDKGCMsg2Seen`,
+`JMDKGFinalizedSeen`.
+
+The responders for these commands are in `jmclient/client_protocol.py` and
+`jmdaemon/daemon_protocol.py`.
+
+In DKG sessions, the party which needs a new pubkey is named the Coordinator.
+
+- `class FROSTClient(DKGClient)`: a client which supports DKG/FROST sessions
+over JM channels. It uses the reference FROST code from
+https://github.com/siv2r/bip-frost-signing/, placed in the
+`jmfrost/frost_ref` package.
+
+It uses the channel-level commands `frostinit`, `frostround1`, `frostround2`
+and `frostagg1`, added to `jmdaemon/protocol.py`.
+
+The commands in `jmbase/commands.py`: `JMFROSTInit`, `JMFROSTRound1`,
+`JMFROSTAgg1`, `JMFROSTRound2`, `JMFROSTInitSeen`, `JMFROSTRound1Seen`,
+`JMFROSTAgg1Seen`, `JMFROSTRound2Seen`.
+
+The responders for these commands are in `jmclient/client_protocol.py` and
+`jmdaemon/daemon_protocol.py`.
+
+In FROST sessions, the party which needs a new signature is named the
+Coordinator.
+
+## Recovery storage, recovery data file
+The ChillDKG recovery data is placed in an unencrypted recovery file named
+`wallet.jmdat.dkg_recovery`. The code of `class DKGRecoveryStorage` is in
+`jmclient/storage.py`.
+
+## Utility scripts
+Current changes in the code allow the creation of unencrypted wallets if an
+empty password is used.
+- `scripts/bdecode.py`: decodes wallet/recovery data files to stdout.
+- `scripts/bencode.py`: encodes a text file to the bencode format.
+Separate options are provided to encode with the DKG data file magic or the
+DKG recovery data file magic.
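The files these scripts handle are an 8-byte magic followed by bencoded data. A self-contained sketch of the encoding side (using a tiny hand-rolled encoder rather than the `bencoder` package, for illustration only):

```python
def bencode(obj) -> bytes:
    # Minimal bencode encoder for the types used by the wallet files:
    # integers, byte strings, lists and dicts.
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        return (b"d" + b"".join(bencode(k) + bencode(v)
                                for k, v in sorted(obj.items())) + b"e")
    raise TypeError("unsupported type: %r" % type(obj))

MAGIC_DKG_RECOVERY = b"JMDKGREC"  # magic written by bencode.py -r
blob = MAGIC_DKG_RECOVERY + bencode({b"t": 2})
```

`bdecode.py` does the reverse: it strips the first 8 magic bytes and bdecodes the remainder.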
diff --git a/docs/frost-wallet.md b/docs/frost-wallet.md
new file mode 100644
index 0000000..19f7e53
--- /dev/null
+++ b/docs/frost-wallet.md
@@ -0,0 +1,68 @@
+# FROST P2TR wallet usage
+
+**NOTE**: the minimal Python version is 3.12
+
+To use the FROST P2TR wallet you need to (example for 2-of-3 FROST signing):
+
+1. Add `txindex=1` to `bitcoin.conf`. This option is needed to get non-wallet
+transactions with `getrawtransaction`; this data is needed to perform signing
+of P2TR inputs.
+
+2. Set `frost = true` in the `POLICY` section of `joinmarket.cfg`:
+```
+[POLICY]
+...
+# Use FROST P2TR SegWit wallet
+frost = true
+```
+
+3. Create a bitcoind watch-only descriptor wallet:
+```
+bitcoin-cli createwallet "wallet_name" true true
+```
+where `true true` means:
+> `disable_private_keys`
+> Disable the possibility of private keys
+> (only watchonlys are possible in this mode).
+
+> `blank`
+> Create a blank wallet. A blank wallet has no keys or HD seed.
+
+4. Get the `hostpubkey` for the wallet by running:
+```
+scripts/wallet-tool.py wallet.jmdat hostpubkey
+...
+021e99d8193b95da10f514556e98882bc2cebfd0ee0711fa71006cbc9e9a135b43
+```
+
+5. Repeat steps 1-4 for the other FROST group wallets.
+
+6. Gather the hostpubkeys from step 4 and place them in the `FROST` section
+of `joinmarket.cfg` as the `hostpubkeys` value, separated by `,`.
+
+7. Add the `t` (threshold) value to the `FROST` section of `joinmarket.cfg`:
+```
+[FROST]
+hostpubkeys = 021e99d8193b95da...,03a2f4ce928da0f5...,02a1e2ee50187f3e...
+t = 2
+```
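A quick sanity check of such a section can be sketched with Python's stdlib `configparser` (the sample below mirrors the snippet above, with the pubkeys truncated as shown; this is an illustration, not JoinMarket's config-loading code):

```python
import configparser

cfg = configparser.ConfigParser()
# Inline sample mirroring the [FROST] section above (keys truncated).
cfg.read_string("""
[FROST]
hostpubkeys = 021e99d8193b95da...,03a2f4ce928da0f5...,02a1e2ee50187f3e...
t = 2
""")
hostpubkeys = [k.strip() for k in cfg.get("FROST", "hostpubkeys").split(",")]
t = cfg.getint("FROST", "t")
# A t-of-n threshold must satisfy 1 <= t <= n.
assert 1 <= t <= len(hostpubkeys)
```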
+
+8. Run permanent FROST processes with the `servefrost` command on `wallet1`,
+`wallet2` and `wallet3`:
+```
+scripts/wallet-tool.py wallet.jmdat servefrost
+```
+
+9. Run the `display` command on `wallet1`:
+```
+scripts/wallet-tool.py wallet.jmdat display
+```
+DKG sessions will then start, generating pubkeys for the wallet addresses.
+This can take several minutes.
+
+10. Repeat step 9 to generate pubkeys for `wallet2` and `wallet3`.
+
+11. Test FROST signing with the `testfrost` command:
+```
+scripts/wallet-tool.py wallet.jmdat testfrost
+```
diff --git a/docs/taproot-wallet.md b/docs/taproot-wallet.md
new file mode 100644
index 0000000..1de784b
--- /dev/null
+++ b/docs/taproot-wallet.md
@@ -0,0 +1,27 @@
+# Taproot P2TR wallet usage
+
+To use the P2TR wallet you need to:
+
+1. Add `txindex=1` to `bitcoin.conf`. This option is needed to get non-wallet
+transactions with `getrawtransaction`; this data is needed to perform signing
+of P2TR inputs.
+
+2. Set `taproot = true` in the `POLICY` section of `joinmarket.cfg`:
+```
+[POLICY]
+...
+# Use Taproot P2TR SegWit wallet
+taproot = true
+```
+
+3. Create a bitcoind watch-only descriptor wallet:
+```
+bitcoin-cli createwallet "wallet_name" true true
+```
+where `true true` means:
+> `disable_private_keys`
+> Disable the possibility of private keys
+> (only watchonlys are possible in this mode).
+
+> `blank`
+> Create a blank wallet. A blank wallet has no keys or HD seed.
diff --git a/pyproject.toml b/pyproject.toml
index d198913..2846b66 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -13,13 +13,13 @@ dependencies = [
"chromalog==1.0.5",
"cryptography==42.0.4",
"service-identity==21.1.0",
- "twisted==24.7.0",
+ "twisted@git+https://github.com/zebra-lucky/twisted@fix_from_pr11890#egg=twisted",
"txtorcon==23.11.0",
]
[project.optional-dependencies]
jmbitcoin = [
- "python-bitcointx==1.1.5",
+ "python-bitcointx@git+https://github.com/zebra-lucky/python-bitcointx@disable_contextvars#egg=python-bitcointx",
]
jmclient = [
"argon2_cffi==21.3.0",
@@ -37,11 +37,10 @@ jmdaemon = [
jmfrost = [
]
jmqtui = [
- "PyQt5!=5.15.0,!=5.15.1,!=5.15.2,!=6.0",
- "PySide2!=5.15.0,!=5.15.1,!=5.15.2,!=6.0", # https://bugreports.qt.io/browse/QTBUG-88688
+ "PySide6==6.9.0", # https://bugreports.qt.io/browse/QTBUG-88688
"qrcode[pil]==7.3.1",
'pywin32; platform_system == "Windows"',
- "qt5reactor==0.6.3",
+ "qt5reactor@git+https://github.com/zebra-lucky/qt5reactor@update_versioneer#egg=qt5reactor",
]
client = [
"joinmarket[jmclient]",
diff --git a/scripts/add-utxo.py b/scripts/add-utxo.py
index 1d6ae83..710a9eb 100755
--- a/scripts/add-utxo.py
+++ b/scripts/add-utxo.py
@@ -5,12 +5,16 @@ users to retry transactions more often without getting banned by
the anti-snooping feature employed by makers.
"""
+import asyncio
import sys
import os
import json
from pprint import pformat
from optparse import OptionParser
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmclient import load_program_config, jm_single,\
open_wallet, WalletService, add_external_commitments, update_commitments,\
PoDLE, get_podle_commitments, get_utxo_info, validate_utxo_data, quit,\
@@ -48,7 +52,7 @@ def add_ext_commitments(utxo_datas):
ecs[u]['reveal'][j] = {'P2':P2, 's':s, 'e':e}
add_external_commitments(ecs)
-def main():
+async def main():
parser = OptionParser(
usage=
'usage: %prog [options] [txid:n]',
@@ -171,7 +175,7 @@ def main():
#csv file or json file.
if options.loadwallet:
wallet_path = get_wallet_path(options.loadwallet)
- wallet = open_wallet(wallet_path, gap_limit=options.gaplimit)
+ wallet = await open_wallet(wallet_path, gap_limit=options.gaplimit)
wallet_service = WalletService(wallet)
if wallet_service.rpc_error:
sys.exit(EXIT_FAILURE)
@@ -182,7 +186,8 @@ def main():
# minor note: adding a utxo from an external wallet for commitments, we
# default to not allowing disabled utxos to avoid a privacy leak, so the
# user would have to explicitly enable.
- for md, utxos in wallet_service.get_utxos_by_mixdepth().items():
+ _utxos = await wallet_service.get_utxos_by_mixdepth()
+ for md, utxos in _utxos.items():
for utxo, utxodata in utxos.items():
wif = wallet_service.get_wif_path(utxodata['path'])
utxo_data.append((utxo, wif))
@@ -245,6 +250,12 @@ def main():
assert len(utxo_data)
add_ext_commitments(utxo_data)
-if __name__ == "__main__":
- main()
+async def _main():
+ await main()
jmprint('done', "success")
+
+
+if __name__ == "__main__":
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
diff --git a/scripts/bdecode.py b/scripts/bdecode.py
new file mode 100755
index 0000000..2e01bca
--- /dev/null
+++ b/scripts/bdecode.py
@@ -0,0 +1,56 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+import bencoder
+import click
+import json
+from pprint import pprint
+
+
+def debyte_list(lst):
+ res = []
+ for item in lst:
+ if isinstance(item, bytes):
+ item = item.decode('ISO-8859-1')
+ elif isinstance(item, list):
+ item = debyte_list(item)
+ res.append(item)
+ return res
+
+
+def debyte_dict(d):
+ res = {}
+ for k, v in d.items():
+ if isinstance(k, bytes):
+ k = k.decode('ISO-8859-1')
+ if isinstance(v, dict):
+ v = debyte_dict(v)
+ elif isinstance(v, bytes):
+ v = v.decode('ISO-8859-1')
+ elif isinstance(v, list):
+ v = debyte_list(v)
+ res[k] = v
+ return res
+
+
+CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])
+@click.command(context_settings=CONTEXT_SETTINGS)
+@click.option('-i', '--input-file', required=True,
+ help='Input file')
+@click.option('-n', '--no-decode', is_flag=True, default=False,
+ help='Do not decode to ISO-8859-1')
+def main(**kwargs):
+ input_file = kwargs.pop('input_file')
+ no_decode = kwargs.pop('no_decode')
+ with open(input_file, 'rb') as fd:
+ data = fd.read()
+ if no_decode:
+ d = bencoder.bdecode(data[8:])
+ pprint(d)
+ else:
+ d = debyte_dict(bencoder.bdecode(data[8:]))
+ print(json.dumps(d, indent=4))
+
+
+if __name__ == '__main__':
+ main()
diff --git a/scripts/bencode.py b/scripts/bencode.py
new file mode 100755
index 0000000..7856902
--- /dev/null
+++ b/scripts/bencode.py
@@ -0,0 +1,69 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+
+import bencoder
+import click
+import json
+from pprint import pprint
+
+
+def enbyte_list(lst):
+ res = []
+ for item in lst:
+ if isinstance(item, str):
+ item = item.encode('ISO-8859-1')
+ elif isinstance(item, list):
+ item = enbyte_list(item)
+ res.append(item)
+ return res
+
+
+def enbyte_dict(d):
+ res = {}
+ for k, v in d.items():
+ if isinstance(k, str):
+ k = k.encode('ISO-8859-1')
+ if isinstance(v, dict):
+ v = enbyte_dict(v)
+ elif isinstance(v, str):
+ v = v.encode('ISO-8859-1')
+ elif isinstance(v, list):
+ v = enbyte_list(v)
+ res[k] = v
+ return res
+
+
+CONTEXT_SETTINGS = dict(help_option_names=['-h', '--help'])
+@click.command(context_settings=CONTEXT_SETTINGS)
+@click.option('-i', '--input-file', required=True,
+ help='Unencoded file')
+@click.option('-o', '--output-file', required=True,
+ help='Output file')
+@click.option('-d', '--dkg-magic', is_flag=True, default=False,
+ help='Prepend dkg storage magic')
+@click.option('-r', '--recovery-magic', is_flag=True, default=False,
+ help='Prepend recovery storage magic')
+def main(**kwargs):
+ input_file = kwargs.pop('input_file')
+ output_file = kwargs.pop('output_file')
+ dkg_magic = kwargs.pop('dkg_magic')
+ recovery_magic = kwargs.pop('recovery_magic')
+ if dkg_magic and recovery_magic:
+        raise click.UsageError('Options -d and -r are mutually exclusive')
+ if dkg_magic:
+ MAGIC_UNENC = b'JMDKGDAT'
+ elif recovery_magic:
+ MAGIC_UNENC = b'JMDKGREC'
+ else:
+ MAGIC_UNENC = b'JMWALLET'
+
+ with open(input_file, 'r') as fd:
+ data = json.loads(fd.read())
+ data = enbyte_dict(data)
+
+ with open(output_file, 'wb') as wfd:
+ wfd.write(MAGIC_UNENC + bencoder.bencode(data))
+
+
+if __name__ == '__main__':
+ main()
diff --git a/scripts/bond-calculator.py b/scripts/bond-calculator.py
index 055435e..047d281 100755
--- a/scripts/bond-calculator.py
+++ b/scripts/bond-calculator.py
@@ -1,10 +1,15 @@
#!/usr/bin/env python3
+
+import asyncio
import sys
from datetime import datetime
from decimal import Decimal
from json import loads
from optparse import OptionParser
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmbase import EXIT_ARGERROR, jmprint, get_log, utxostr_to_utxo, EXIT_FAILURE
from jmbitcoin import amount_to_sat, amount_to_str
from jmclient import add_base_options, load_program_config, jm_single, get_bond_values
@@ -24,7 +29,7 @@ with the fidelity bonds in the orderbook.
log = get_log()
-def main() -> None:
+async def main() -> None:
parser = OptionParser(
usage="usage: %prog [options] UTXO or amount",
description=DESCRIPTION,
@@ -128,5 +133,11 @@ def main() -> None:
jmprint(f"Top {result['percentile']}% of the orderbook by value")
+async def _main():
+ await main()
+
+
if __name__ == "__main__":
- main()
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
diff --git a/scripts/bumpfee.py b/scripts/bumpfee.py
index 9c27961..84448c7 100755
--- a/scripts/bumpfee.py
+++ b/scripts/bumpfee.py
@@ -1,6 +1,11 @@
#!/usr/bin/env python3
+import asyncio
from decimal import Decimal
+
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmbase import get_log, hextobin, bintohex
from jmbase.support import EXIT_SUCCESS, EXIT_FAILURE, EXIT_ARGERROR, jmprint, cli_prompt_user_yesno
from jmclient import jm_single, load_program_config, open_test_wallet_maybe, get_wallet_path, WalletService
@@ -130,17 +135,18 @@ def prepare_transaction(new_tx, old_tx, wallet):
return (input_scripts, spent_outs)
-def sign_transaction(new_tx, old_tx, wallet_service):
+async def sign_transaction(new_tx, old_tx, wallet_service):
input_scripts, _ = prepare_transaction(new_tx, old_tx, wallet_service.wallet)
- success, msg = wallet_service.sign_tx(new_tx, input_scripts)
+ success, msg = await wallet_service.sign_tx(new_tx, input_scripts)
if not success:
raise RuntimeError("Failed to sign transaction, quitting. Error msg: " + msg)
-def sign_psbt(new_tx, old_tx, wallet_service):
+async def sign_psbt(new_tx, old_tx, wallet_service):
_, spent_outs = prepare_transaction(new_tx, old_tx, wallet_service.wallet)
- unsigned_psbt = wallet_service.create_psbt_from_tx(
+ unsigned_psbt = await wallet_service.create_psbt_from_tx(
new_tx, spent_outs=spent_outs)
- signed_psbt, err = wallet_service.sign_psbt(unsigned_psbt.serialize())
+ signed_psbt, err = await wallet_service.sign_psbt(
+ unsigned_psbt.serialize())
if err:
raise RuntimeError("Failed to sign PSBT, quitting. Error message: " + err)
@@ -199,7 +205,7 @@ def create_bumped_tx(tx, fee_per_kb, wallet, output_index=-1):
tx.vin, tx.vout, nLockTime=tx.nLockTime,
nVersion=tx.nVersion)
-if __name__ == '__main__':
+async def main():
(options, args) = parser.parse_args()
load_program_config(config_path=options.datadir)
if len(args) < 2:
@@ -221,7 +227,7 @@ if __name__ == '__main__':
# open the wallet and synchronize it
wallet_path = get_wallet_path(wallet_name, None)
- wallet = open_test_wallet_maybe(
+ wallet = await open_test_wallet_maybe(
wallet_path, wallet_name, options.amtmixdepths - 1,
wallet_password_stdin=options.wallet_password_stdin,
gap_limit=options.gaplimit)
@@ -250,7 +256,7 @@ if __name__ == '__main__':
# sign the transaction
if options.with_psbt:
try:
- psbt = sign_psbt(bumped_tx, orig_tx, wallet_service)
+ psbt = await sign_psbt(bumped_tx, orig_tx, wallet_service)
print("Completed PSBT created: ")
print(wallet_service.human_readable_psbt(psbt))
@@ -263,7 +269,7 @@ if __name__ == '__main__':
sys.exit(EXIT_FAILURE)
else:
try:
- sign_transaction(bumped_tx, orig_tx, wallet_service)
+ await sign_transaction(bumped_tx, orig_tx, wallet_service)
except RuntimeError as e:
jlog.error(str(e))
sys.exit(EXIT_FAILURE)
@@ -284,3 +290,13 @@ if __name__ == '__main__':
jlog.error("Transaction broadcast failed!")
sys.exit(EXIT_FAILURE)
+
+async def _main():
+ await main()
+ reactor.stop()
+
+
+if __name__ == "__main__":
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
diff --git a/scripts/genwallet.py b/scripts/genwallet.py
index ddffaf0..812ca19 100755
--- a/scripts/genwallet.py
+++ b/scripts/genwallet.py
@@ -3,8 +3,13 @@
# A script for noninteractively creating wallets.
# The implementation is similar to wallet_generate_recover_bip39 in jmclient/wallet_utils.py
+import asyncio
import os
from optparse import OptionParser
+
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from pathlib import Path
from jmclient import (
load_program_config, add_base_options, SegwitWalletFidelityBonds, SegwitLegacyWallet,
@@ -14,7 +19,7 @@ from jmbase.support import get_log, jmprint
log = get_log()
-def main():
+async def main():
parser = OptionParser(
usage='usage: %prog [options] wallet_file_name [password]',
description='Create a wallet with the given wallet name and password.'
@@ -45,10 +50,19 @@ def main():
# Fidelity Bonds are not available for segwit legacy wallets
walletclass = SegwitLegacyWallet
entropy = seed and SegwitLegacyWallet.entropy_from_mnemonic(seed)
- wallet = create_wallet(wallet_path, password, wallet_utils.DEFAULT_MIXDEPTH, walletclass, entropy=entropy)
+ wallet = await create_wallet(
+ wallet_path, password, wallet_utils.DEFAULT_MIXDEPTH,
+ walletclass, entropy=entropy)
jmprint("recovery_seed:{}"
.format(wallet.get_mnemonic_words()[0]), "important")
wallet.close()
+
+async def _main():
+ await main()
+
+
if __name__ == "__main__":
- main()
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
diff --git a/scripts/joinmarket-qt.py b/scripts/joinmarket-qt.py
index c5435f8..52aa7c9 100755
--- a/scripts/joinmarket-qt.py
+++ b/scripts/joinmarket-qt.py
@@ -19,16 +19,19 @@ Some widgets copied and modified from https://github.com/spesmilo/electrum
along with this program. If not, see .
'''
+import asyncio
import sys, datetime, os, logging
import platform, json, threading, time
from optparse import OptionParser
from typing import Optional, Tuple
-from PySide2 import QtCore
+from PySide6 import QtCore
-from PySide2.QtGui import *
+from PySide6.QtGui import *
-from PySide2.QtWidgets import *
+from PySide6.QtWidgets import *
+
+import PySide6.QtAsyncio as QtAsyncio
if platform.system() == 'Windows':
MONOSPACE_FONT = 'Lucida Console'
@@ -53,7 +56,7 @@ qt5reactor.install()
#Version of this Qt script specifically
JM_GUI_VERSION = '33'
-from jmbase import get_log, stop_reactor, set_custom_stop_reactor
+from jmbase import get_log, stop_reactor, set_custom_stop_reactor, bintohex
from jmbase.support import EXIT_FAILURE, utxo_to_utxostr,\
hextobin, JM_CORE_VERSION
import jmbitcoin as btc
@@ -61,9 +64,9 @@ from jmclient import load_program_config, get_network, update_persist_config,\
open_test_wallet_maybe, get_wallet_path,\
jm_single, validate_address, fidelity_bond_weighted_order_choose, Taker,\
JMClientProtocolFactory, start_reactor, get_schedule, schedule_to_text,\
- get_blockchain_interface_instance, direct_send, WalletService,\
- RegtestBitcoinCoreInterface, tumbler_taker_finished_update,\
- get_tumble_log, restart_wait, tumbler_filter_orders_callback,\
+ get_blockchain_interface_instance, direct_send, WalletService, \
+ RegtestBitcoinCoreInterface, tumbler_taker_finished_update, \
+ get_tumble_log, restart_wait, tumbler_filter_orders_callback, \
wallet_generate_recover_bip39, wallet_display, get_utxos_enabled_disabled,\
NO_ROUNDING, get_max_cj_fee_values, get_default_max_absolute_fee, \
get_default_max_relative_fee, RetryableStorageError, add_base_options, \
@@ -465,7 +468,7 @@ class SpendTab(QWidget):
current_path = os.path.dirname(os.path.realpath(__file__))
firstarg = QFileDialog.getOpenFileName(self,
'Choose Schedule File',
- directory=current_path,
+ dir=current_path,
options=QFileDialog.DontUseNativeDialog)
#TODO validate the schedule
log.debug('Looking for schedule in: ' + str(firstarg))
@@ -653,7 +656,8 @@ class SpendTab(QWidget):
'the transaction after connecting, and shown the\n'
'fees to pay; you can cancel at that point, or by \n'
'pressing "Abort".')
- self.startButton.clicked.connect(self.startSingle)
+ self.startButton.clicked.connect(
+ lambda: asyncio.create_task(self.startSingle()))
self.abortButton = QPushButton('Abort')
self.abortButton.setEnabled(False)
buttons = QHBoxLayout()
@@ -780,7 +784,7 @@ class SpendTab(QWidget):
def errorDirectSend(self, msg):
JMQtMessageBox(self, msg, mbtype="warn", title="Error")
- def startSingle(self):
+ async def startSingle(self):
if not self.spendstate.runstate == 'ready':
log.info("Cannot start join, already running.")
if not self.validateSingleSend():
@@ -801,12 +805,15 @@ class SpendTab(QWidget):
if len(self.changeInput.text().strip()) > 0:
custom_change = str(self.changeInput.text().strip())
try:
- txid = direct_send(mainWindow.wallet_service, mixdepth,
- [(destaddr, amount)],
- accept_callback=self.checkDirectSend,
- info_callback=self.infoDirectSend,
- error_callback=self.errorDirectSend,
- custom_change_addr=custom_change)
+ tx = await direct_send(
+ mainWindow.wallet_service, mixdepth,
+ [(destaddr, amount)],
+ accept_callback=self.checkDirectSend,
+ info_callback=self.infoDirectSend,
+ error_callback=self.errorDirectSend,
+ return_transaction=True,
+ custom_change_addr=custom_change)
+ txid = bintohex(tx.GetTxid()[::-1])
except Exception as e:
JMQtMessageBox(self, e.args[0], title="Error", mbtype="warn")
return
@@ -821,7 +828,7 @@ class SpendTab(QWidget):
if rtxid == txid:
return True
return False
- mainWindow.wallet_service.active_txids.append(txid)
+ mainWindow.wallet_service.active_txs[txid] = tx
mainWindow.wallet_service.register_callbacks([qt_directsend_callback],
txid, cb_type="confirmed")
self.persistTxToHistory(destaddr, self.direct_send_amount, txid)
@@ -920,6 +927,7 @@ class SpendTab(QWidget):
user_callback=self.getMaxCJFees)
log.info("Using maximum coinjoin fee limits per maker of {:.4%}, {} "
"".format(maxcjfee[0], btc.amount_to_str(maxcjfee[1])))
+ wallet = mainWindow.wallet_service.wallet
self.taker = Taker(mainWindow.wallet_service,
self.spendstate.loaded_schedule,
maxcjfee,
@@ -1348,21 +1356,23 @@ class CoinsTab(QWidget):
self.cTW.setSelectionMode(QAbstractItemView.ExtendedSelection)
self.cTW.header().setSectionResizeMode(QHeaderView.Interactive)
self.cTW.header().setStretchLastSection(False)
- self.cTW.on_update = self.updateUtxos
+ self.cTW.on_update = lambda: asyncio.create_task(self.updateUtxos())
vbox = QVBoxLayout()
self.setLayout(vbox)
vbox.setContentsMargins(0,0,0,0)
vbox.setSpacing(0)
vbox.addWidget(self.cTW)
- self.updateUtxos()
+
+ async def async_initUI(self):
+ await self.updateUtxos()
self.show()
def getHeaders(self):
'''Function included in case dynamic in future'''
return ['Txid:n', 'Amount in BTC', 'Address', 'Label']
- def updateUtxos(self):
+ async def updateUtxos(self):
""" Note that this refresh of the display only accesses in-process
utxo database (no sync e.g.) so can be immediate.
"""
@@ -1378,7 +1388,8 @@ class CoinsTab(QWidget):
utxos_enabled = {}
utxos_disabled = {}
for i in range(jm_single().config.getint("GUI", "max_mix_depth")):
- utxos_e, utxos_d = get_utxos_enabled_disabled(mainWindow.wallet_service, i)
+ utxos_e, utxos_d = await get_utxos_enabled_disabled(
+ mainWindow.wallet_service, i)
if utxos_e != {}:
utxos_enabled[i] = utxos_e
if utxos_d != {}:
@@ -1407,7 +1418,8 @@ class CoinsTab(QWidget):
# keys must be utxo format else a coding error:
assert success
s = "{0:.08f}".format(v['value']/1e8)
- a = mainWindow.wallet_service.script_to_addr(v["script"])
+ a = await mainWindow.wallet_service.script_to_addr(
+ v["script"])
item = QTreeWidgetItem([t, s, a, v["label"]])
item.setFont(0, QFont(MONOSPACE_FONT))
#if rows[i][forchange][j][3] != 'new':
@@ -1415,12 +1427,12 @@ class CoinsTab(QWidget):
seq_item.addChild(item)
m_item.setExpanded(True)
- def toggle_utxo_disable(self, txids, idxs):
+ async def toggle_utxo_disable(self, txids, idxs):
for i in range(0, len(txids)):
txid = txids[i]
txid_bytes = hextobin(txid)
mainWindow.wallet_service.toggle_disable_utxo(txid_bytes, idxs[i])
- self.updateUtxos()
+ await self.updateUtxos()
def create_menu(self, position):
# all selected items
@@ -1445,8 +1457,9 @@ class CoinsTab(QWidget):
txid, idx = item.text(0).split(":")
menu = QMenu()
- menu.addAction("Freeze/un-freeze utxo(s) (toggle)",
- lambda: self.toggle_utxo_disable(txids, idxs))
+ menu.addAction(
+ "Freeze/un-freeze utxo(s) (toggle)",
+ lambda: asyncio.create_task(self.toggle_utxo_disable(txids, idxs)))
menu.addAction("Copy transaction id to clipboard",
lambda: app.clipboard().setText(txid))
menu.exec_(self.cTW.viewport().mapToGlobal(position))
@@ -1469,7 +1482,7 @@ class JMWalletTab(QWidget):
v.header().resizeSection(0, 400) # size of "Address" column
v.header().resizeSection(1, 130) # size of "Index" column
v.setSelectionMode(QAbstractItemView.ExtendedSelection)
- v.on_update = self.updateWalletInfo
+ v.on_update = lambda: asyncio.create_task(self.updateWalletInfo())
v.hide()
self.walletTree = v
vbox = QVBoxLayout()
@@ -1480,7 +1493,9 @@ class JMWalletTab(QWidget):
vbox.addWidget(v)
buttons = QWidget()
vbox.addWidget(buttons)
- self.updateWalletInfo()
+
+ async def async_initUI(self):
+ await self.updateWalletInfo()
self.show()
def getHeaders(self):
@@ -1520,9 +1535,11 @@ class JMWalletTab(QWidget):
shortcut=QKeySequence(QKeySequence.Copy))
menu.addAction("Show QR code",
lambda: self.openQRCodePopup(xpub, xpub))
- menu.addAction("Refresh wallet",
- lambda: mainWindow.updateWalletInfo(None, "all"),
- shortcut=QKeySequence(QKeySequence.Refresh))
+ menu.addAction(
+ "Refresh wallet",
+ lambda: asyncio.create_task(
+ mainWindow.updateWalletInfo(None, "all")),
+ shortcut=QKeySequence(QKeySequence.Refresh))
#TODO add more items to context menu
menu.exec_(self.walletTree.viewport().mapToGlobal(position))
@@ -1544,7 +1561,7 @@ class JMWalletTab(QWidget):
bip21_uri = bip21_uri.upper()
self.openQRCodePopup(address, bip21_uri)
- def updateWalletInfo(self, walletinfo=None):
+ async def updateWalletInfo(self, walletinfo=None):
max_mixdepth_count = jm_single().config.getint("GUI", "max_mix_depth")
previous_expand_states = []
@@ -1691,19 +1708,23 @@ class JMMainWindow(QMainWindow):
openWalletAction = QAction('&Open...', self)
openWalletAction.setStatusTip('Open JoinMarket wallet file')
openWalletAction.setShortcut(QKeySequence.Open)
- openWalletAction.triggered.connect(self.openWallet)
+ openWalletAction.triggered.connect(
+ lambda: asyncio.create_task(self.openWallet()))
generateAction = QAction('&Generate...', self)
generateAction.setStatusTip('Generate new wallet')
- generateAction.triggered.connect(self.generateWallet)
+ generateAction.triggered.connect(
+ lambda: asyncio.create_task(self.generateWallet()))
recoverAction = QAction('&Recover...', self)
recoverAction.setStatusTip('Recover wallet from seed phrase')
- recoverAction.triggered.connect(self.recoverWallet)
+ recoverAction.triggered.connect(
+ lambda: asyncio.create_task(self.recoverWallet()))
showSeedAction = QAction('&Show seed', self)
showSeedAction.setStatusTip('Show wallet seed phrase')
showSeedAction.triggered.connect(self.showSeedDialog)
exportPrivAction = QAction('&Export keys', self)
exportPrivAction.setStatusTip('Export all private keys to a file')
- exportPrivAction.triggered.connect(self.exportPrivkeysJson)
+ exportPrivAction.triggered.connect(
+ lambda: asyncio.create_task(self.exportPrivkeysJson()))
changePassAction = QAction('&Change passphrase...', self)
changePassAction.setStatusTip('Change wallet encryption passphrase')
changePassAction.triggered.connect(self.changePassphrase)
@@ -1746,7 +1767,7 @@ class JMMainWindow(QMainWindow):
self.receiver_bip78_dialog = ReceiveBIP78Dialog(
self.startReceiver, self.stopReceiver)
- def startReceiver(self):
+ async def startReceiver(self):
""" Initializes BIP78 Receiving object and
starts the setup of onion service to serve
request.
@@ -1773,6 +1794,8 @@ class JMMainWindow(QMainWindow):
uri_created_callback=self.receiver_bip78_dialog.update_uri,
shutdown_callback=self.receiver_bip78_dialog.process_complete,
mode="gui")
+ await self.backend_receiver.async_init(self.wallet_service, mixdepth,
+ amount, mode="gui")
if not self.bip78daemon:
#First run means we need to start: create daemon;
# the client and its connection are created in the .initiate()
@@ -1785,7 +1808,7 @@ class JMMainWindow(QMainWindow):
jm_coinjoin=False, bip78=True, daemon=True,
gui=True, rs=False)
self.bip78daemon = True
- self.backend_receiver.initiate()
+ await self.backend_receiver.initiate()
return True
def stopReceiver(self):
@@ -1817,7 +1840,7 @@ class JMMainWindow(QMainWindow):
lyt.addWidget(btnbox)
msgbox.exec_()
- def exportPrivkeysJson(self):
+ async def exportPrivkeysJson(self):
if not self.wallet_service:
JMQtMessageBox(self,
"No wallet loaded.",
@@ -1848,7 +1871,7 @@ class JMMainWindow(QMainWindow):
#option for anyone with gaplimit troubles, although
#that is a complete mess for a user, mostly changing
#the gaplimit in the Settings tab should address it.
- rows = get_wallet_printout(self.wallet_service)
+ rows = await get_wallet_printout(self.wallet_service)
addresses = []
for forchange in rows[0]:
for mixdepth in forchange:
@@ -1978,9 +2001,9 @@ class JMMainWindow(QMainWindow):
title="Restart")
self.close()
- def recoverWallet(self):
+ async def recoverWallet(self):
try:
- success = wallet_generate_recover_bip39(
+ success = await wallet_generate_recover_bip39(
"recover", os.path.join(jm_single().datadir, 'wallets'),
"wallet.jmdat",
display_seed_callback=None,
@@ -2002,9 +2025,9 @@ class JMMainWindow(QMainWindow):
JMQtMessageBox(self, 'Wallet saved to ' + self.walletname,
title="Wallet created")
- self.initWallet(seed=self.walletname)
+ await self.initWallet(seed=self.walletname)
- def openWallet(self):
+ async def openWallet(self):
wallet_loaded = False
error_text = ""
@@ -2019,15 +2042,16 @@ class JMMainWindow(QMainWindow):
wallet_path = wallet_file_text
if not os.path.isabs(wallet_path):
wallet_path = os.path.join(jm_single().datadir, 'wallets', wallet_path)
-
try:
- wallet_loaded = mainWindow.loadWalletFromBlockchain(wallet_path, openWalletDialog.passphraseEdit.text(), rethrow=True)
+ wallet_loaded = await mainWindow.loadWalletFromBlockchain(
+ wallet_path, openWalletDialog.passphraseEdit.text(),
+ rethrow=True)
except Exception as e:
error_text = str(e)
else:
break
- def selectWallet(self, testnet_seed=None):
+ async def selectWallet(self, testnet_seed=None):
if jm_single().config.get("BLOCKCHAIN", "blockchain_source") != "regtest":
# guaranteed to exist as load_program_config was called on startup:
wallets_path = os.path.join(jm_single().datadir, 'wallets')
@@ -2049,7 +2073,8 @@ class JMMainWindow(QMainWindow):
return
pwd = str(text).strip()
try:
- decrypted = self.loadWalletFromBlockchain(firstarg[0], pwd)
+ decrypted = await self.loadWalletFromBlockchain(
+ firstarg[0], pwd)
except Exception as e:
JMQtMessageBox(self,
str(e),
@@ -2071,15 +2096,17 @@ class JMMainWindow(QMainWindow):
firstarg = str(testnet_seed)
pwd = None
#ignore return value as there is no decryption failure possible
- self.loadWalletFromBlockchain(firstarg, pwd)
+ await self.loadWalletFromBlockchain(firstarg, pwd)
- def loadWalletFromBlockchain(self, firstarg=None, pwd=None, rethrow=False):
+ async def loadWalletFromBlockchain(self, firstarg=None,
+ pwd=None, rethrow=False):
if firstarg:
wallet_path = get_wallet_path(str(firstarg), None)
try:
- wallet = open_test_wallet_maybe(wallet_path, str(firstarg),
- None, ask_for_password=False, password=pwd.encode('utf-8') if pwd else None,
- gap_limit=jm_single().config.getint("GUI", "gaplimit"))
+ wallet = await open_test_wallet_maybe(
+ wallet_path, str(firstarg), None, ask_for_password=False,
+ password=pwd.encode('utf-8') if pwd else None,
+ gap_limit=jm_single().config.getint("GUI", "gaplimit"))
except RetryableStorageError as e:
if rethrow:
raise e
@@ -2112,8 +2139,8 @@ class JMMainWindow(QMainWindow):
return "error"
if jm_single().bc_interface is None:
- self.centralWidget().widget(0).updateWalletInfo(
- get_wallet_printout(self.wallet_service))
+ await self.centralWidget().widget(0).updateWalletInfo(
+ await get_wallet_printout(self.wallet_service))
return True
# add information callbacks:
@@ -2121,13 +2148,14 @@ class JMMainWindow(QMainWindow):
self.wallet_service.autofreeze_warning_cb = self.autofreeze_warning_cb
self.wallet_service.startService()
self.syncmsg = ""
- self.walletRefresh = task.LoopingCall(self.updateWalletInfo, None, None)
+ self.walletRefresh = task.LoopingCall(
+ self.updateWalletInfo, None, None)
self.walletRefresh.start(5.0)
self.statusBar().showMessage("Reading wallet from blockchain ...")
return True
- def updateWalletInfo(self, txd, txid):
+ async def updateWalletInfo(self, txd, txid):
""" TODO: see use of `jmclient.BaseWallet.process_new_tx` in
`jmclient.WalletService.transaction_monitor`;
we could similarly find the exact utxos to update in the view,
@@ -2158,7 +2186,8 @@ class JMMainWindow(QMainWindow):
[self.updateWalletInfo], None, "all")
self.update_registered = True
try:
- t.updateWalletInfo(get_wallet_printout(self.wallet_service))
+ await t.updateWalletInfo(
+ await get_wallet_printout(self.wallet_service))
except Exception:
# this is very likely to happen in case Core RPC connection goes
# down (but, order of events means it is not deterministic).
@@ -2170,13 +2199,13 @@ class JMMainWindow(QMainWindow):
self.syncmsg = newsyncmsg
self.statusBar().showMessage(self.syncmsg)
- def generateWallet(self):
+ async def generateWallet(self):
log.debug('generating wallet')
if jm_single().config.get("BLOCKCHAIN", "blockchain_source") == "regtest":
seed = self.getTestnetSeed()
- self.selectWallet(testnet_seed=seed)
+ await self.selectWallet(testnet_seed=seed)
else:
- self.initWallet()
+ await self.initWallet()
def checkPassphrase(self):
match = False
@@ -2299,7 +2328,7 @@ class JMMainWindow(QMainWindow):
return None
return str(mnemonic_extension)
- def initWallet(self, seed=None):
+ async def initWallet(self, seed=None):
'''Creates a new wallet if seed not provided.
Initializes by syncing.
'''
@@ -2307,7 +2336,7 @@ class JMMainWindow(QMainWindow):
try:
# guaranteed to exist as load_program_config was called on startup:
wallets_path = os.path.join(jm_single().datadir, 'wallets')
- success = wallet_generate_recover_bip39(
+ success = await wallet_generate_recover_bip39(
"generate", wallets_path, "wallet.jmdat",
display_seed_callback=self.displayWords,
enter_seed_callback=None,
@@ -2327,9 +2356,10 @@ class JMMainWindow(QMainWindow):
JMQtMessageBox(self, 'Wallet saved to ' + self.walletname,
title="Wallet created")
- self.loadWalletFromBlockchain(self.walletname, pwd=self.textpassword)
+ await self.loadWalletFromBlockchain(
+ self.walletname, pwd=self.textpassword)
-def get_wallet_printout(wallet_service):
+async def get_wallet_printout(wallet_service):
"""Given a WalletService object, retrieve the list of
addresses and corresponding balances to be displayed.
We retrieve a WalletView abstraction, and iterate over
@@ -2341,7 +2371,7 @@ def get_wallet_printout(wallet_service):
xpubs: [[xpubext, xpubint], ...]
Bitcoin amounts returned are in btc, not satoshis
"""
- walletview = wallet_display(wallet_service, False, serialized=False)
+ walletview = await wallet_display(wallet_service, False, serialized=False)
rows = []
mbalances = []
xpubs = []
@@ -2415,14 +2445,14 @@ update_config_for_gui()
check_and_start_tor()
-def onTabChange(i):
+async def onTabChange(i, tabWidget):
""" Respond to change of tab.
"""
# TODO: hardcoded literal;
# note that this is needed for an auto-update
# of utxos on the Coins tab only atm.
if i == 2:
- tabWidget.widget(2).updateUtxos()
+ await tabWidget.widget(2).updateUtxos()
#to allow testing of confirm/unconfirm callback for multiple txs
if isinstance(jm_single().bc_interface, RegtestBitcoinCoreInterface):
@@ -2439,43 +2469,52 @@ tumble_log = get_tumble_log(logsdir)
ignored_makers = []
appWindowTitle = 'JoinMarketQt'
from twisted.internet import reactor
+reactor.runReturn()
mainWindow = JMMainWindow(reactor)
-tabWidget = QTabWidget(mainWindow)
-tabWidget.addTab(JMWalletTab(), "JM Wallet")
-tabWidget.addTab(SpendTab(), "Coinjoins")
-tabWidget.addTab(CoinsTab(), "Coins")
-tabWidget.addTab(TxHistoryTab(), "Tx History")
-settingsTab = SettingsTab()
-tabWidget.addTab(settingsTab, "Settings")
-
-mainWindow.resize(600, 500)
-if get_network() == 'testnet':
- suffix = ' - Testnet'
-elif get_network() == 'signet':
- suffix = ' - Signet'
-else:
- suffix = ''
-mainWindow.setWindowTitle(appWindowTitle + suffix)
-tabWidget.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Expanding)
-mainWindow.setCentralWidget(tabWidget)
-tabWidget.currentChanged.connect(onTabChange)
-mainWindow.show()
-reactor.runReturn()
-# Qt does not stop automatically when we stop the qt5reactor, and
-# also we don't want to close without warning the user;
-# patch our stop_reactor method to include the necessary cleanup:
-def qt_shutdown():
- # checking ensures we only fire the close
- # event once even if stop_reactor is called
- # multiple times (which it often is):
- if mainWindow.isVisible():
- mainWindow.unconditional_shutdown = True
- mainWindow.close()
-set_custom_stop_reactor(qt_shutdown)
-
-# Upon launching the app, ask the user to choose a wallet to open
-mainWindow.openWallet()
-
-sys.exit(app.exec_())
+async def main():
+ tabWidget = QTabWidget(mainWindow)
+ jm_wallet_tab = JMWalletTab()
+ await jm_wallet_tab.async_initUI()
+ tabWidget.addTab(jm_wallet_tab, "JM Wallet")
+ tabWidget.addTab(SpendTab(), "Coinjoins")
+ coins_tab = CoinsTab()
+ await coins_tab.async_initUI()
+ tabWidget.addTab(coins_tab, "Coins")
+ tabWidget.addTab(TxHistoryTab(), "Tx History")
+ settingsTab = SettingsTab()
+ tabWidget.addTab(settingsTab, "Settings")
+
+ mainWindow.resize(600, 500)
+ if get_network() == 'testnet':
+ suffix = ' - Testnet'
+ elif get_network() == 'signet':
+ suffix = ' - Signet'
+ else:
+ suffix = ''
+ mainWindow.setWindowTitle(appWindowTitle + suffix)
+ tabWidget.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Expanding)
+ mainWindow.setCentralWidget(tabWidget)
+ tabWidget.currentChanged.connect(
+ lambda i: asyncio.create_task(onTabChange(i, tabWidget)))
+
+ mainWindow.show()
+
+ # Qt does not stop automatically when we stop the qt5reactor, and
+ # also we don't want to close without warning the user;
+ # patch our stop_reactor method to include the necessary cleanup:
+ def qt_shutdown():
+ # checking ensures we only fire the close
+ # event once even if stop_reactor is called
+ # multiple times (which it often is):
+ if mainWindow.isVisible():
+ mainWindow.unconditional_shutdown = True
+ mainWindow.close()
+ set_custom_stop_reactor(qt_shutdown)
+
+ # Upon launching the app, ask the user to choose a wallet to open
+ await mainWindow.openWallet()
+
+QtAsyncio.run(coro=main(), handle_sigint=True, debug=True)
+# sys.exit(app.exec_()) # FIXME?
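The GUI's Qt signals stay synchronous; they reach the new `async` slots by scheduling tasks on the running loop (see the `tabWidget.currentChanged.connect(lambda i: asyncio.create_task(...))` hookup above), while `QtAsyncio.run` drives both Qt and asyncio. A minimal sketch of that bridging pattern using plain asyncio in place of Qt (`qt_style_callback` and `on_tab_change` are illustrative stand-ins, not JoinMarket or Qt APIs):

```python
import asyncio

results = []

async def on_tab_change(i):
    # stand-in for an async slot such as updateUtxos()
    await asyncio.sleep(0)
    results.append(i)

def qt_style_callback(i):
    # Qt signal handlers must be synchronous; they bridge into async
    # code by scheduling a task on the already-running event loop:
    asyncio.get_running_loop().create_task(on_tab_change(i))

async def main():
    qt_style_callback(2)       # simulate tabWidget.currentChanged emitting 2
    await asyncio.sleep(0.01)  # yield so the scheduled task can run
    return results

print(asyncio.run(main()))  # prints [2]
```

The `lambda i: asyncio.create_task(onTabChange(i, tabWidget))` connections in the patch follow exactly this shape.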
diff --git a/scripts/obwatch/ob-watcher.py b/scripts/obwatch/ob-watcher.py
index 5638d9c..e3b2afc 100755
--- a/scripts/obwatch/ob-watcher.py
+++ b/scripts/obwatch/ob-watcher.py
@@ -53,7 +53,8 @@ bond_exponent = None
#Initial state: allow only SW offer types
sw0offers = list(filter(lambda x: x[0:3] == 'sw0', offername_list))
swoffers = list(filter(lambda x: x[0:3] == 'swa' or x[0:3] == 'swr', offername_list))
-filtered_offername_list = sw0offers
+troffers = list(filter(lambda x: x[0:3] == 'tra' or x[0:3] == 'trr', offername_list))
+filtered_offername_list = troffers # FIXME: allow selection of offer types
rotateObform = '
'
refresh_orderbook_form = ''
@@ -80,7 +81,8 @@ def do_nothing(arg, order, btc_unit, rel_unit):
def ordertype_display(ordertype, order, btc_unit, rel_unit):
ordertypes = {'sw0absoffer': 'Native SW Absolute Fee', 'sw0reloffer': 'Native SW Relative Fee',
- 'swabsoffer': 'SW Absolute Fee', 'swreloffer': 'SW Relative Fee'}
+ 'swabsoffer': 'SW Absolute Fee', 'swreloffer': 'SW Relative Fee',
+ 'trabsoffer': 'Taproot Absolute Fee', 'trreloffer': 'Taproot Relative Fee'}
return ordertypes[ordertype]
@@ -88,13 +90,14 @@ def cjfee_display(cjfee: Union[Decimal, float, int],
order: dict,
btc_unit: str,
rel_unit: str) -> str:
- if order['ordertype'] in ['swabsoffer', 'sw0absoffer']:
+ if order['ordertype'] in ['trabsoffer', 'swabsoffer', 'sw0absoffer']:
val = sat_to_unit(cjfee, html.unescape(btc_unit))
if btc_unit == "BTC":
return "%.8f" % val
else:
return str(val)
- elif order['ordertype'] in ['reloffer', 'swreloffer', 'sw0reloffer']:
+ elif order['ordertype'] in ['trreloffer', 'reloffer', 'swreloffer',
+ 'sw0reloffer']:
return str(Decimal(cjfee) * Decimal(rel_unit_to_factor[rel_unit])) + rel_unit
@@ -245,8 +248,8 @@ class OrderbookPageRequestHeader(http.server.SimpleHTTPRequestHandler):
for row in rows:
o = dict(row)
if 'cjfee' in o:
- if o['ordertype'] == 'swabsoffer'\
- or o['ordertype'] == 'sw0absoffer':
+ if o['ordertype'] in ['trabsoffer', 'swabsoffer',
+ 'sw0absoffer']:
o['cjfee'] = int(o['cjfee'])
else:
o['cjfee'] = str(Decimal(o['cjfee']))
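The hunks above fold the new Taproot order types into the existing branching: `trabsoffer` joins the absolute-fee types (integer satoshi amounts), `trreloffer` the relative-fee types (decimal fractions scaled by a unit factor). A simplified sketch of that dispatch, with the unit handling reduced to satoshis and percent (the real `cjfee_display` also converts via `sat_to_unit` and `rel_unit_to_factor`):

```python
from decimal import Decimal

def cjfee_display(ordertype, cjfee):
    # absolute-fee offers carry a satoshi amount:
    if ordertype in ('trabsoffer', 'swabsoffer', 'sw0absoffer'):
        return "%d sat" % int(cjfee)
    # relative-fee offers carry a fraction, shown here as a percentage:
    if ordertype in ('trreloffer', 'reloffer', 'swreloffer', 'sw0reloffer'):
        return str(Decimal(cjfee) * Decimal(100)) + "%"
    raise ValueError("unknown ordertype: " + ordertype)

print(cjfee_display('trreloffer', '0.0002'))  # prints 0.0200%
```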
diff --git a/scripts/qtsupport.py b/scripts/qtsupport.py
index 5aec113..d1f6658 100644
--- a/scripts/qtsupport.py
+++ b/scripts/qtsupport.py
@@ -17,12 +17,13 @@ Qt files for the wizard for initiating a tumbler run.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
'''
+import asyncio
import math, logging, qrcode, re, string
from io import BytesIO
-from PySide2 import QtCore
+from PySide6 import QtCore
-from PySide2.QtGui import *
-from PySide2.QtWidgets import *
+from PySide6.QtGui import *
+from PySide6.QtWidgets import *
from bitcointx.core import satoshi_to_coins
from jmbitcoin.amount import amount_to_sat, btc_to_sat, sat_to_str
@@ -188,10 +189,14 @@ def JMQtMessageBox(obj, msg, mbtype='info', title='', detailed_text= None):
b.setText(msg)
b.setDetailedText(detailed_text)
b.setStandardButtons(QMessageBox.Ok)
- retval = b.exec_()
+ b.open(message_box_clicked)
else:
mbtypes[mbtype](obj, title, msg)
+ @QtCore.Slot(QMessageBox.StandardButton)
+ def message_box_clicked(button_id):
+ print('QMessageBox.StandardButton', button_id)
+
class QtHandler(logging.Handler):
def __init__(self):
@@ -1040,7 +1045,7 @@ class ReceiveBIP78Dialog(QDialog):
self.qr_btn.setVisible(False)
self.btnbox.button(QDialogButtonBox.Cancel).setDisabled(True)
- def start_generate(self):
+ async def start_generate(self):
""" Before starting up the
hidden service and initiating the payment
workflow, disallow starting again; user
@@ -1049,7 +1054,7 @@ class ReceiveBIP78Dialog(QDialog):
aborted, we reset the generate button.
"""
self.generate_btn.setDisabled(True)
- if not self.action_fn():
+ if not await self.action_fn():
self.generate_btn.setDisabled(False)
def get_receive_bip78_dialog(self):
@@ -1110,7 +1115,8 @@ class ReceiveBIP78Dialog(QDialog):
# it is also associated with 'rejection' (and we don't use "OK" because
# concept doesn't quite fit here:
self.btnbox.rejected.connect(self.shutdown_actions)
- self.generate_btn.clicked.connect(self.start_generate)
+ self.generate_btn.clicked.connect(
+ lambda: asyncio.create_task(self.start_generate()))
self.qr_btn.clicked.connect(self.open_qr_code_popup)
# does not trigger cancel_fn callback:
self.close_btn.clicked.connect(self.close)
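The `exec_()` → `open()` change above is the core of the Qt migration: `exec_()` blocks in a nested event loop, while `open(slot)` returns immediately and invokes the slot later, which is what allows the surrounding code to become `async`. A plain-asyncio sketch of wrapping such a callback-style API into an awaitable (`open_dialog` is a hypothetical stand-in, not a Qt API):

```python
import asyncio

def open_dialog(on_clicked):
    # stand-in for QMessageBox.open(slot): returns immediately and
    # invokes the slot later, instead of blocking like exec_()
    asyncio.get_running_loop().call_later(0.01, on_clicked, "Ok")

async def ask_user():
    fut = asyncio.get_running_loop().create_future()
    open_dialog(fut.set_result)  # bridge the callback into an awaitable
    return await fut             # resumes when the "button" is clicked

print(asyncio.run(ask_user()))  # prints Ok
```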
diff --git a/scripts/receive-payjoin.py b/scripts/receive-payjoin.py
index f0d5d6e..446476a 100755
--- a/scripts/receive-payjoin.py
+++ b/scripts/receive-payjoin.py
@@ -1,9 +1,13 @@
#!/usr/bin/env python3
+import asyncio
from optparse import OptionParser
import sys
+
+import jmclient # install asyncioreactor
from twisted.internet import reactor
+
from jmbase import get_log, jmprint
from jmclient import jm_single, load_program_config, \
WalletService, open_test_wallet_maybe, get_wallet_path, check_regtest, \
@@ -13,7 +17,7 @@ from jmbase.support import EXIT_FAILURE, EXIT_ARGERROR
from jmbitcoin import amount_to_sat
jlog = get_log()
-def receive_payjoin_main():
+async def receive_payjoin_main():
parser = OptionParser(usage='usage: %prog [options] [wallet file] [amount-to-receive]')
add_base_options(parser)
parser.add_option('-P', '--hs-port', action='store', type='int',
@@ -55,7 +59,7 @@ def receive_payjoin_main():
wallet_path = get_wallet_path(wallet_name, None)
max_mix_depth = max([options.mixdepth, options.amtmixdepths - 1])
- wallet = open_test_wallet_maybe(
+ wallet = await open_test_wallet_maybe(
wallet_path, wallet_name, max_mix_depth,
wallet_password_stdin=options.wallet_password_stdin,
gap_limit=options.gaplimit)
@@ -72,6 +76,8 @@ def receive_payjoin_main():
sys.exit(EXIT_ARGERROR)
receiver_manager = JMBIP78ReceiverManager(wallet_service, options.mixdepth,
bip78_amount, options.hsport)
+ await receiver_manager.async_init(wallet_service, options.mixdepth,
+ bip78_amount)
- reactor.callWhenRunning(receiver_manager.initiate)
+ reactor.callWhenRunning(
+ lambda: asyncio.create_task(receiver_manager.initiate()))
nodaemon = jm_single().config.getint("DAEMON", "no_daemon")
daemon = True if nodaemon == 1 else False
diff --git a/scripts/sendpayment.py b/scripts/sendpayment.py
index d085167..4adc39c 100755
--- a/scripts/sendpayment.py
+++ b/scripts/sendpayment.py
@@ -7,10 +7,13 @@ For notes, see scripts/README.md; in particular, note the use
of "schedules" with the -S flag.
"""
+import asyncio
import sys
-from twisted.internet import reactor
import pprint
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmclient import Taker, load_program_config, get_schedule,\
JMClientProtocolFactory, start_reactor, validate_address, is_burn_destination, \
jm_single, estimate_tx_fee, direct_send, WalletService,\
@@ -18,7 +21,7 @@ from jmclient import Taker, load_program_config, get_schedule,\
get_sendpayment_parser, get_max_cj_fee_values, check_regtest, \
parse_payjoin_setup, send_payjoin, general_custom_change_warning, \
nonwallet_custom_change_warning, sweep_custom_change_warning, \
- EngineError, check_and_start_tor
+ EngineError, check_and_start_tor, FrostWallet, FrostIPCClient
from twisted.python.log import startLogging
from jmbase.support import get_log, jmprint, \
EXIT_FAILURE, EXIT_ARGERROR, cli_prompt_user_yesno
@@ -50,7 +53,7 @@ def pick_order(orders, n): #pragma: no cover
return orders[pickedOrderIndex]
pickedOrderIndex = -1
-def main():
+async def main():
parser = get_sendpayment_parser()
(options, args) = parser.parse_args()
load_program_config(config_path=options.datadir)
@@ -191,17 +194,21 @@ def main():
max_mix_depth = max([mixdepth, options.amtmixdepths - 1])
wallet_path = get_wallet_path(wallet_name, None)
- wallet = open_test_wallet_maybe(
+ wallet = await open_test_wallet_maybe(
wallet_path, wallet_name, max_mix_depth,
wallet_password_stdin=options.wallet_password_stdin,
gap_limit=options.gaplimit)
wallet_service = WalletService(wallet)
if wallet_service.rpc_error:
sys.exit(EXIT_FAILURE)
+ if isinstance(wallet, FrostWallet):
+ ipc_client = FrostIPCClient(wallet)
+ await ipc_client.async_init()
+ wallet.set_ipc_client(ipc_client)
# in this script, we need the wallet synced before
# logic processing for some paths, so do it now:
while not wallet_service.synced:
- wallet_service.sync_wallet(fast=not options.recoversync)
+ await wallet_service.sync_wallet(fast=not options.recoversync)
# the sync call here will now be a no-op:
wallet_service.startService()
@@ -270,13 +277,13 @@ def main():
sys.exit(EXIT_ARGERROR)
if options.makercount == 0 and not bip78url:
- tx = direct_send(wallet_service, mixdepth,
- [(destaddr, amount)],
- options.answeryes,
- with_final_psbt=options.with_psbt,
- optin_rbf=not options.no_rbf,
- custom_change_addr=custom_change,
- change_label=options.changelabel)
+ tx = await direct_send(wallet_service, mixdepth,
+ [(destaddr, amount)],
+ options.answeryes,
+ with_final_psbt=options.with_psbt,
+ optin_rbf=not options.no_rbf,
+ custom_change_addr=custom_change,
+ change_label=options.changelabel)
if options.with_psbt:
log.info("This PSBT is fully signed and can be sent externally for "
"broadcasting:")
@@ -309,11 +316,14 @@ def main():
return False
return True
+ asyncio_loop = asyncio.get_event_loop()
+ taker_finished_future = asyncio_loop.create_future()
+
def taker_finished(res, fromtx=False, waittime=0.0, txdetails=None):
if fromtx == "unconfirmed":
#If final entry, stop *here*, don't wait for confirmation
if taker.schedule_index + 1 == len(taker.schedule):
- reactor.stop()
+ taker_finished_future.set_result(True)
return
if fromtx:
if res:
@@ -334,7 +344,7 @@ def main():
#can only happen with < minimum_makers; see above.
log.info("A transaction failed but there are insufficient "
"honest respondants to continue; giving up.")
- reactor.stop()
+ taker_finished_future.set_result(False)
return
#This is Phase 2; do we have enough to try again?
taker.add_honest_makers(list(set(
@@ -344,7 +354,7 @@ def main():
"POLICY", "minimum_makers"):
log.info("Too few makers responded honestly; "
"giving up this attempt.")
- reactor.stop()
+ taker_finished_future.set_result(False)
return
jmprint("We failed to complete the transaction. The following "
"makers responded honestly: " + str(taker.honest_makers) +\
@@ -364,11 +374,12 @@ def main():
else:
if not res:
log.info("Did not complete successfully, shutting down")
+ taker_finished_future.set_result(False)
#Should usually be unreachable, unless conf received out of order;
#because we should stop on 'unconfirmed' for last (see above)
else:
log.info("All transactions completed correctly")
- reactor.stop()
+ taker_finished_future.set_result(True)
nodaemon = jm_single().config.getint("DAEMON", "no_daemon")
daemon = True if nodaemon == 1 else False
@@ -379,7 +390,8 @@ def main():
manager = parse_payjoin_setup(args[1], wallet_service, options.mixdepth)
reactor.callWhenRunning(send_payjoin, manager)
# JM is default, so must be switched off explicitly in this call:
- start_reactor(dhost, dport, bip78=True, jm_coinjoin=False, daemon=daemon)
+ start_reactor(dhost, dport, bip78=True, jm_coinjoin=False,
+ daemon=daemon, gui=True)
return
else:
@@ -394,8 +406,17 @@ def main():
if jm_single().config.get("BLOCKCHAIN", "network") == "regtest":
startLogging(sys.stdout)
- start_reactor(dhost, dport, clientfactory, daemon=daemon)
+ start_reactor(dhost, dport, clientfactory, daemon=daemon, gui=True)
+ await taker_finished_future
-if __name__ == "__main__":
- main()
+
+async def _main():
+ await main()
jmprint('done', "success")
+ reactor.stop()
+
+
+if __name__ == "__main__":
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
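The `sendpayment.py` changes replace every `reactor.stop()` inside `taker_finished` with `taker_finished_future.set_result(...)`, so the synchronous callback merely signals completion and `main()` awaits the outcome; shutdown then happens once, in `_main()`. A minimal sketch of that completion-future pattern, simulating the taker's callback with `call_later`:

```python
import asyncio

async def run_taker():
    done = asyncio.get_running_loop().create_future()

    def taker_finished(res, fromtx=False):
        # instead of reactor.stop(): resolve the future and let the
        # awaiting coroutine decide how (and how often) to shut down
        if not done.done():
            done.set_result(res)

    # simulate the taker invoking its completion callback later:
    asyncio.get_running_loop().call_later(0.01, taker_finished, True)
    return await done

print(asyncio.run(run_taker()))  # prints True
```

Guarding with `done.done()` mirrors why this is safer than `reactor.stop()`: the original callback could fire more than once, and stopping an already-stopped reactor raises.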
diff --git a/scripts/sendtomany.py b/scripts/sendtomany.py
index 82c283b..8140bfd 100755
--- a/scripts/sendtomany.py
+++ b/scripts/sendtomany.py
@@ -5,8 +5,13 @@ for a Joinmarket user, although of course it may be useful
for other reasons).
"""
+import asyncio
from optparse import OptionParser
import jmbitcoin as btc
+
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmbase import (get_log, jmprint, bintohex, utxostr_to_utxo,
IndentedHelpFormatterWithNL, cli_prompt_user_yesno)
from jmclient import load_program_config, estimate_tx_fee, jm_single,\
@@ -64,7 +69,7 @@ p2wpkh ('bc1') addresses.
utxos - set segwit=False in the POLICY section of
joinmarket.cfg for the former."""
-def main():
+async def main():
parser = OptionParser(
usage=
'usage: %prog [options] utxo destaddr1 destaddr2 ..',
@@ -74,7 +79,7 @@ def main():
'--utxo-address-type',
action='store',
dest='utxo_address_type',
- help=('type of address of coin being spent - one of "p2pkh", "p2wpkh", "p2sh-p2wpkh". '
+ help=('type of address of coin being spent - one of "p2pkh", "p2wpkh", "p2sh-p2wpkh", "p2tr". '
'No other scriptpubkey types (e.g. multisig) are supported. If not set, we default '
'to what is in joinmarket.cfg.'),
default=""
@@ -98,7 +103,9 @@ def main():
if not success:
quit(parser, "Failed to load utxo from string: " + utxo)
if options.utxo_address_type == "":
- if jm_single().config.get("POLICY", "segwit") == "false":
+ if jm_single().config.get("POLICY", "taproot") == "true":
+ utxo_address_type = "p2tr"
+ elif jm_single().config.get("POLICY", "segwit") == "false":
utxo_address_type = "p2pkh"
elif jm_single().config.get("POLICY", "native") == "false":
utxo_address_type = "p2sh-p2wpkh"
@@ -117,6 +124,13 @@ def main():
return
jm_single().bc_interface.pushtx(txsigned.serialize())
-if __name__ == "__main__":
- main()
+
+async def _main():
+ await main()
jmprint('done', "success")
+
+
+if __name__ == "__main__":
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
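When `--utxo-address-type` is unset, `sendtomany.py` now consults a `POLICY`/`taproot` flag before the existing segwit checks. The fallthrough can be sketched as below; the final `p2wpkh` default is an assumption, since the closing `else` branch lies outside the hunk shown:

```python
def utxo_address_type_from_policy(policy):
    """Mirror of the option fallthrough in sendtomany.py: taproot
    first, then legacy, then wrapped segwit.  The native-segwit
    default is assumed (that branch is outside the diff hunk)."""
    if policy.get("taproot") == "true":
        return "p2tr"
    if policy.get("segwit") == "false":
        return "p2pkh"
    if policy.get("native") == "false":
        return "p2sh-p2wpkh"
    return "p2wpkh"

print(utxo_address_type_from_policy({"taproot": "true"}))  # prints p2tr
```

Note the ordering: setting `taproot = true` wins even if `segwit = false`, so the flags are checked most-specific first.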
diff --git a/scripts/snicker/create-snicker-proposal.py b/scripts/snicker/create-snicker-proposal.py
index 89397d9..2585293 100755
--- a/scripts/snicker/create-snicker-proposal.py
+++ b/scripts/snicker/create-snicker-proposal.py
@@ -21,8 +21,13 @@ specified (see help for options), in which case the proposal is
output to stdout in the same string format: base64proposal,hexpubkey.
"""
+import asyncio
import sys
from optparse import OptionParser
+
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmbase import bintohex, jmprint, hextobin, \
EXIT_ARGERROR, EXIT_FAILURE, EXIT_SUCCESS, get_pow
import jmbitcoin as btc
@@ -35,7 +40,7 @@ from jmclient.configure import get_log
log = get_log()
-def main():
+async def main():
parser = OptionParser(
usage=
'usage: %prog [options] walletname hex-tx input-index output-index net-transfer',
@@ -106,7 +111,7 @@ def main():
jm_single().config.set("POLICY", "tx_fees", str(options.txfee))
max_mix_depth = max([options.mixdepth, options.amtmixdepths - 1])
wallet_path = get_wallet_path(wallet_name, None)
- wallet = open_test_wallet_maybe(
+ wallet = await open_test_wallet_maybe(
wallet_path, wallet_name, max_mix_depth,
wallet_password_stdin=options.wallet_password_stdin,
gap_limit=options.gaplimit)
@@ -131,21 +136,21 @@ def main():
fee_est = estimate_tx_fee(2, 3, txtype=wallet_service.get_txtype())
amt_required = originating_tx.vout[output_index].nValue + fee_est
- prop_utxo_dict = wallet_service.select_utxos(options.mixdepth,
+ prop_utxo_dict = await wallet_service.select_utxos(options.mixdepth,
amt_required)
prop_utxos = list(prop_utxo_dict)
prop_utxo_vals = [prop_utxo_dict[x] for x in prop_utxos]
# get the private key for that utxo
priv = wallet_service.get_key_from_addr(
- wallet_service.script_to_addr(prop_utxo_vals[0]['script']))
+ await wallet_service.script_to_addr(prop_utxo_vals[0]['script']))
# construct the arguments for the snicker proposal:
our_input_utxos = [btc.CMutableTxOut(x['value'],
x['script']) for x in prop_utxo_vals]
# destination must be a different mixdepth:
- prop_destn_spk = wallet_service.get_new_script((
+ prop_destn_spk = await wallet_service.get_new_script((
options.mixdepth + 1) % (wallet_service.mixdepth + 1), 1)
- change_spk = wallet_service.get_new_script(options.mixdepth, 1)
+ change_spk = await wallet_service.get_new_script(options.mixdepth, 1)
their_input = (txid1, output_index)
# we also need to extract the pubkey of the chosen input from
# the witness; we vary this depending on our wallet type:
@@ -153,7 +158,7 @@ def main():
if not pubkey:
log.error("Failed to extract pubkey from transaction: {}".format(msg))
sys.exit(EXIT_FAILURE)
- encrypted_proposal = wallet_service.create_snicker_proposal(
+ encrypted_proposal = await wallet_service.create_snicker_proposal(
prop_utxos, their_input,
our_input_utxos,
originating_tx.vout[output_index],
@@ -225,6 +230,13 @@ class SNICKERPostingClient(object):
self.proposals_with_nonce.append(preimage)
return self.proposals_with_nonce
-if __name__ == "__main__":
- main()
+
+async def _main():
+ await main()
jmprint('done', "success")
+
+
+if __name__ == "__main__":
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
diff --git a/scripts/snicker/receive-snicker.py b/scripts/snicker/receive-snicker.py
index 411efbe..14e5523 100755
--- a/scripts/snicker/receive-snicker.py
+++ b/scripts/snicker/receive-snicker.py
@@ -1,7 +1,12 @@
#!/usr/bin/env python3
+import asyncio
from optparse import OptionParser
import sys
+
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmbase import get_log, jmprint
from jmclient import (jm_single, load_program_config, WalletService,
open_test_wallet_maybe, get_wallet_path,
@@ -12,7 +17,7 @@ from jmbase.support import EXIT_ARGERROR
jlog = get_log()
-def receive_snicker_main():
+async def receive_snicker_main():
usage = """ Use this script to receive proposals for SNICKER
coinjoins, parse them and then broadcast coinjoins
that fit your criteria. See the SNICKER section of
@@ -62,7 +67,7 @@ Usage: %prog [options] wallet file [proposal]
wallet_path = get_wallet_path(wallet_name, None)
max_mix_depth = max([options.mixdepth, options.amtmixdepths - 1])
- wallet = open_test_wallet_maybe(
+ wallet = await open_test_wallet_maybe(
wallet_path, wallet_name, max_mix_depth,
wallet_password_stdin=options.wallet_password_stdin,
gap_limit=options.gaplimit)
@@ -77,7 +82,7 @@ Usage: %prog [options] wallet file [proposal]
snicker_r = SNICKERReceiver(wallet_service)
if options.no_upload:
proposal = args[1]
- snicker_r.process_proposals([proposal])
+ await snicker_r.process_proposals([proposal])
return
servers = jm_single().config.get("SNICKER", "servers").split(",")
snicker_pf = SNICKERClientProtocolFactory(snicker_r, servers, oneshot=True)
@@ -86,6 +91,14 @@ Usage: %prog [options] wallet file [proposal]
None, snickerfactory=snicker_pf,
daemon=daemon)
-if __name__ == "__main__":
- receive_snicker_main()
+
+async def _main():
+ await receive_snicker_main()
jmprint('done')
+ reactor.stop()
+
+
+if __name__ == "__main__":
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
diff --git a/scripts/snicker/snicker-finder.py b/scripts/snicker/snicker-finder.py
index 470a54f..1a68458 100755
--- a/scripts/snicker/snicker-finder.py
+++ b/scripts/snicker/snicker-finder.py
@@ -24,8 +24,13 @@ in Bitcoin Core in order to get full transactions, since it
parses the raw blocks.
"""
+import asyncio
import sys
from optparse import OptionParser
+
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmbase import bintohex, EXIT_ARGERROR, jmprint
import jmbitcoin as btc
from jmclient import (jm_single, add_base_options, load_program_config,
@@ -49,7 +54,7 @@ def write_candidate_to_file(ttype, candidate, blocknum, unspents, filename):
"found in the above.\n")
f.write("The unspent indices are: " + " ".join(
(str(u) for u in unspents)) + "\n")
-def main():
+async def main():
parser = OptionParser(
usage=
'usage: %prog [options] startingblock [endingblock]',
@@ -111,6 +116,14 @@ def main():
write_candidate_to_file("Joinmarket coinjoin", t, b,
unspents, options.candidate_file_name)
log.info("Finished processing block: {}".format(b))
-if __name__ == "__main__":
- main()
+
+
+async def _main():
+ await main()
jmprint('done', "success")
+
+
+if __name__ == "__main__":
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
diff --git a/scripts/snicker/snicker-recovery.py b/scripts/snicker/snicker-recovery.py
index c72739c..c6e7422 100755
--- a/scripts/snicker/snicker-recovery.py
+++ b/scripts/snicker/snicker-recovery.py
@@ -24,8 +24,13 @@ keys, so as a reminder, *always* back up either jmdat wallet files,
or at least, the imported keys themselves.)
"""
+import asyncio
import sys
from optparse import OptionParser
+
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmbase import bintohex, EXIT_ARGERROR, jmprint
import jmbitcoin as btc
from jmclient import (add_base_options, load_program_config,
@@ -71,7 +76,7 @@ def get_pubs_and_indices_of_ancestor_inputs(txin, wallet_service, ours):
tx = wallet_service.get_transaction(txin.prevout.hash[::-1])
return get_pubs_and_indices_of_inputs(tx, wallet_service, ours=ours)
-def main():
+async def main():
parser = OptionParser(
usage=
'usage: %prog [options] walletname',
@@ -104,7 +109,7 @@ def main():
wallet_name = args[0]
wallet_path = get_wallet_path(wallet_name, None)
max_mix_depth = max([options.mixdepth, options.amtmixdepths - 1])
- wallet = open_test_wallet_maybe(
+ wallet = await open_test_wallet_maybe(
wallet_path, wallet_name, max_mix_depth,
wallet_password_stdin=options.wallet_password_stdin,
gap_limit=options.gaplimit)
@@ -161,7 +166,7 @@ def main():
for (our_pub, j) in get_pubs_and_indices_of_ancestor_inputs(tx.vin[mi], wallet_service, ours=True):
our_spk = wallet_service.pubkey_to_script(our_pub)
our_priv = wallet_service.get_key_from_addr(
- wallet_service.script_to_addr(our_spk))
+ await wallet_service.script_to_addr(our_spk))
tweak_bytes = btc.ecdh(our_priv[:-1], other_pub)
tweaked_pub = btc.snicker_pubkey_tweak(our_pub, tweak_bytes)
tweaked_spk = wallet_service.pubkey_to_script(tweaked_pub)
@@ -169,7 +174,7 @@ def main():
# TODO wallet.script_to_addr has a dubious assertion, that's why
# we use btc method directly:
address_found = str(btc.CCoinAddress.from_scriptPubKey(btc.CScript(tweaked_spk)))
- #address_found = wallet_service.script_to_addr(tweaked_spk)
+ #address_found = await wallet_service.script_to_addr(tweaked_spk)
jmprint("Found a new SNICKER output belonging to us.")
jmprint("Output address {} in the following transaction:".format(
address_found))
@@ -178,8 +183,9 @@ def main():
# NB for a recovery we accept putting any imported keys all into
# the same mixdepth (0); TODO investigate correcting this, it will
# be a little complicated.
- success, msg = wallet_service.check_tweak_matches_and_import(wallet_service.script_to_addr(our_spk),
- tweak_bytes, tweaked_pub, wallet_service.mixdepth)
+ success, msg = await wallet_service.check_tweak_matches_and_import(
+ await wallet_service.script_to_addr(our_spk),
+ tweak_bytes, tweaked_pub, wallet_service.mixdepth)
if not success:
jmprint("Failed to import SNICKER key: {}".format(msg), "error")
return False
@@ -199,9 +205,16 @@ def main():
"restarting this script.".format(earliest_new_blockheight))
return False
-if __name__ == "__main__":
- res = main()
+
+async def _main():
+ res = await main()
if not res:
jmprint("Script finished, recovery is NOT complete.", level="warning")
else:
jmprint("Script finished, recovery is complete.")
+
+
+if __name__ == "__main__":
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
diff --git a/scripts/snicker/snicker-seed-tx.py b/scripts/snicker/snicker-seed-tx.py
index 2aec47b..99cab6b 100755
--- a/scripts/snicker/snicker-seed-tx.py
+++ b/scripts/snicker/snicker-seed-tx.py
@@ -19,9 +19,14 @@ this is a simulated coinjoin, it may be deducible that it is only really
a *signalling* fake coinjoin, so it is better not to violate the principle.
"""
+import asyncio
import sys
import random
from optparse import OptionParser
+
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmbase import bintohex, jmprint, EXIT_ARGERROR, EXIT_FAILURE
import jmbitcoin as btc
from jmclient import (jm_single, load_program_config, check_regtest,
@@ -32,7 +37,7 @@ from jmclient.configure import get_log
log = get_log()
-def main():
+async def main():
parser = OptionParser(
usage=
'usage: %prog [options] walletname',
@@ -96,7 +101,7 @@ def main():
jm_single().config.set("POLICY", "tx_fees", str(options.txfee))
max_mix_depth = max([options.mixdepth, options.amtmixdepths - 1])
wallet_path = get_wallet_path(wallet_name, None)
- wallet = open_test_wallet_maybe(
+ wallet = await open_test_wallet_maybe(
wallet_path, wallet_name, max_mix_depth,
wallet_password_stdin=options.wallet_password_stdin,
gap_limit=options.gaplimit)
@@ -117,7 +122,8 @@ def main():
# *second* largest utxo as the receiver utxo; this ensures that we
# have enough for the proposer to cover. We consume utxos greedily,
# meaning we'll at least some of the time, be consolidating.
- utxo_dict = wallet_service.get_utxos_by_mixdepth()[options.mixdepth]
+ _utxos = await wallet_service.get_utxos_by_mixdepth()
+ utxo_dict = _utxos[options.mixdepth]
if not len(utxo_dict) >= 2:
log.error("Cannot create fake SNICKER tx without at least two utxos, quitting")
sys.exit(EXIT_ARGERROR)
@@ -158,8 +164,10 @@ def main():
# (not only in trivial output pattern, but also in subset-sum), there
# is little advantage in making it use different output mixdepths, so
# here to prevent fragmentation, everything is kept in the same mixdepth.
- receiver_addr, proposer_addr, change_addr = (wallet_service.script_to_addr(
- wallet_service.get_new_script(options.mixdepth, 1)) for _ in range(3))
+ receiver_addr, proposer_addr, change_addr = (
+ await wallet_service.script_to_addr(
+ await wallet_service.get_new_script(options.mixdepth, 1))
+ for _ in range(3))
# persist index update:
wallet_service.save_wallet()
outputs = btc.construct_snicker_outputs(
@@ -188,7 +196,7 @@ def main():
script = utxo_dict[utxo]['script']
amount = utxo_dict[utxo]['value']
our_inputs[index] = (script, amount)
- success, msg = wallet_service.sign_tx(tx, our_inputs)
+ success, msg = await wallet_service.sign_tx(tx, our_inputs)
if not success:
log.error("Failed to sign transaction: " + msg)
sys.exit(EXIT_FAILURE)
@@ -205,6 +213,13 @@ def main():
log.info("Successfully broadcast fake SNICKER coinjoin: " +\
bintohex(tx.GetTxid()[::-1]))
-if __name__ == "__main__":
- main()
+
+async def _main():
+ await main()
jmprint('done', "success")
+
+
+if __name__ == "__main__":
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
diff --git a/scripts/snicker/snicker-server.py b/scripts/snicker/snicker-server.py
index a132c69..0a9492c 100755
--- a/scripts/snicker/snicker-server.py
+++ b/scripts/snicker/snicker-server.py
@@ -22,7 +22,11 @@ arguments:
"""
+import asyncio
+
+import jmclient # install asyncioreactor
from twisted.internet import reactor
+
from twisted.internet.defer import Deferred
from twisted.web.server import Site
from twisted.web.resource import Resource
@@ -34,6 +38,7 @@ import json
import sqlite3
import threading
from io import BytesIO
+
from jmbase import jmprint, hextobin, verify_pow
from jmclient import process_shutdown, jm_single, load_program_config, check_and_start_tor
from jmclient.configure import get_log
@@ -329,7 +334,7 @@ def snicker_server_start(port, local_port=None, hsdir=None):
ssm = SNICKERServerManager(port, local_port=local_port, hsdir=hsdir)
ssm.start_snicker_server_and_tor()
-if __name__ == "__main__":
+async def _main():
load_program_config(bs="no-blockchain")
check_and_start_tor()
# in testing, we can optionally use ephemeral;
@@ -341,4 +346,8 @@ if __name__ == "__main__":
         local_port = int(sys.argv[2])
         hsdir = sys.argv[3]
     snicker_server_start(port, local_port, hsdir)
+
+if __name__ == "__main__":
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
reactor.run()
diff --git a/scripts/tumbler.py b/scripts/tumbler.py
index 59cc3ce..4827886 100755
--- a/scripts/tumbler.py
+++ b/scripts/tumbler.py
@@ -1,7 +1,11 @@
#!/usr/bin/env python3
+import asyncio
import sys
+
+import jmclient # install asyncioreactor
from twisted.internet import reactor
+
import os
import pprint
from twisted.python.log import startLogging
@@ -11,7 +15,8 @@ from jmclient import Taker, load_program_config, get_schedule,\
schedule_to_text, estimate_tx_fee, restart_waiter, WalletService,\
get_tumble_log, tumbler_taker_finished_update, check_regtest, \
tumbler_filter_orders_callback, validate_address, get_tumbler_parser, \
- get_max_cj_fee_values, get_total_tumble_amount, ScheduleGenerationErrorNoFunds
+ get_max_cj_fee_values, get_total_tumble_amount, \
+ ScheduleGenerationErrorNoFunds
from jmclient.wallet_utils import DEFAULT_MIXDEPTH
@@ -20,7 +25,7 @@ from jmbase.support import get_log, jmprint, EXIT_SUCCESS, \
log = get_log()
-def main():
+async def main():
(options, args) = get_tumbler_parser().parse_args()
options_org = options
options = vars(options)
@@ -49,7 +54,8 @@ def main():
else:
max_mix_depth = DEFAULT_MIXDEPTH
wallet_path = get_wallet_path(wallet_name, None)
- wallet = open_test_wallet_maybe(wallet_path, wallet_name, max_mix_depth,
+ wallet = await open_test_wallet_maybe(
+ wallet_path, wallet_name, max_mix_depth,
wallet_password_stdin=options_org.wallet_password_stdin)
wallet_service = WalletService(wallet)
if wallet_service.rpc_error:
@@ -197,6 +203,12 @@ def main():
jm_single().config.getint("DAEMON", "daemon_port"),
clientfactory, daemon=daemon)
-if __name__ == "__main__":
- main()
+async def _main():
+    await main()
print('done')
+
+
+if __name__ == "__main__":
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
diff --git a/scripts/wallet-tool.py b/scripts/wallet-tool.py
index d71c81d..bd26d49 100755
--- a/scripts/wallet-tool.py
+++ b/scripts/wallet-tool.py
@@ -1,6 +1,41 @@
#!/usr/bin/env python3
+
+import asyncio
+import sys
+
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmbase import jmprint
from jmclient import wallet_tool_main
+
+async def _main():
+ try:
+ res = await wallet_tool_main("wallets")
+ if res:
+ jmprint(res, "success")
+ else:
+ jmprint("Finished", "success")
+ except SystemExit as e:
+ return e.args[0] if e.args else None
+ finally:
+ for task in asyncio.all_tasks():
+ task.cancel()
+ if reactor.running:
+ reactor.stop()
+
+
if __name__ == "__main__":
- jmprint(wallet_tool_main("wallets"), "success")
+ asyncio_loop = asyncio.get_event_loop()
+ main_task = asyncio_loop.create_task(_main())
+ reactor.run()
+ if main_task.done():
+ try:
+ exit_status = main_task.result()
+ if exit_status:
+ sys.exit(exit_status)
+ except asyncio.CancelledError:
+ pass
+ except Exception:
+ raise
diff --git a/scripts/yg-privacyenhanced.py b/scripts/yg-privacyenhanced.py
index 4c4e840..4e8f407 100755
--- a/scripts/yg-privacyenhanced.py
+++ b/scripts/yg-privacyenhanced.py
@@ -1,7 +1,12 @@
#!/usr/bin/env python3
+
+import asyncio
import random
import sys
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmbase import get_log, jmprint, EXIT_ARGERROR
from jmbitcoin import amount_to_str
from jmclient import YieldGeneratorBasic, ygmain, jm_single
@@ -107,6 +112,12 @@ class YieldGeneratorPrivacyEnhanced(YieldGeneratorBasic):
return [order]
-if __name__ == "__main__":
- ygmain(YieldGeneratorPrivacyEnhanced, nickserv_password='')
+async def _main():
+ await ygmain(YieldGeneratorPrivacyEnhanced, nickserv_password='')
jmprint('done', "success")
+
+
+if __name__ == "__main__":
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
diff --git a/scripts/yield-generator-basic.py b/scripts/yield-generator-basic.py
index 299e0a1..fab9fca 100755
--- a/scripts/yield-generator-basic.py
+++ b/scripts/yield-generator-basic.py
@@ -1,11 +1,23 @@
#!/usr/bin/env python3
+import asyncio
+
+import jmclient # install asyncioreactor
+from twisted.internet import reactor
+
from jmbase import jmprint
from jmclient import YieldGeneratorBasic, ygmain
# YIELD GENERATOR SETTINGS ARE NOW IN YOUR joinmarket.cfg CONFIG FILE
# (You can also use command line flags; see --help for this script).
+
+async def _main():
+ await ygmain(YieldGeneratorBasic, nickserv_password='')
+ jmprint("done", "success")
+
+
if __name__ == "__main__":
- ygmain(YieldGeneratorBasic, nickserv_password='')
- jmprint('done', "success")
+ asyncio_loop = asyncio.get_event_loop()
+ asyncio_loop.create_task(_main())
+ reactor.run()
diff --git a/src/jmbase/__init__.py b/src/jmbase/__init__.py
index c0a1867..8976f39 100644
--- a/src/jmbase/__init__.py
+++ b/src/jmbase/__init__.py
@@ -9,7 +9,8 @@ from .support import (get_log, chunks, debug_silence, jmprint,
JM_WALLET_NAME_PREFIX, JM_APP_NAME,
IndentedHelpFormatterWithNL, wrapped_urlparse,
bdict_sdict_convert, random_insert, dict_factory,
- cli_prompt_user_value, cli_prompt_user_yesno)
+ cli_prompt_user_value, cli_prompt_user_yesno,
+ async_hexbin, twisted_sys_exit)
from .proof_of_work import get_pow, verify_pow
from .twisted_utils import (stop_reactor, is_hs_uri, get_tor_agent,
get_nontor_agent, JMHiddenService,
diff --git a/src/jmbase/commands.py b/src/jmbase/commands.py
index d75721c..a64309e 100644
--- a/src/jmbase/commands.py
+++ b/src/jmbase/commands.py
@@ -78,6 +78,90 @@ class JMShutdown(JMCommand):
"""
arguments = []
+
+"""Messages used by DKG parties"""
+
+class JMDKGInit(JMCommand):
+ arguments = [
+ (b'hostpubkeyhash', Unicode()),
+ (b'session_id', Unicode()),
+ (b'sig', Unicode()),
+ ]
+
+class JMDKGPMsg1(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'hostpubkeyhash', Unicode()),
+ (b'session_id', Unicode()),
+ (b'sig', Unicode()),
+ (b'pmsg1', Unicode()),
+ ]
+
+class JMDKGPMsg2(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'session_id', Unicode()),
+ (b'pmsg2', Unicode()),
+ ]
+
+class JMDKGCMsg1(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'session_id', Unicode()),
+ (b'cmsg1', Unicode()),
+ ]
+
+class JMDKGCMsg2(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'session_id', Unicode()),
+ (b'cmsg2', Unicode()),
+ (b'ext_recovery', Unicode()),
+ ]
+
+class JMDKGFinalized(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'session_id', Unicode()),
+ ]
+
+
+"""Messages used by FROST parties"""
+
+class JMFROSTInit(JMCommand):
+ arguments = [
+ (b'hostpubkeyhash', Unicode()),
+ (b'session_id', Unicode()),
+ (b'sig', Unicode()),
+ ]
+
+class JMFROSTRound1(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'hostpubkeyhash', Unicode()),
+ (b'session_id', Unicode()),
+ (b'sig', Unicode()),
+ (b'pub_nonce', Unicode()),
+ ]
+
+class JMFROSTAgg1(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'session_id', Unicode()),
+ (b'nonce_agg', Unicode()),
+ (b'dkg_session_id', Unicode()),
+ (b'ids', Unicode()),
+ (b'msg', Unicode()),
+ ]
+
+class JMFROSTRound2(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'session_id', Unicode()),
+ (b'partial_sig', Unicode()),
+ ]
+
+
"""TAKER specific commands
"""
@@ -193,6 +277,92 @@ class JMRequestMsgSigVerify(JMCommand):
(b'max_encoded', Integer()),
(b'hostid', Unicode())]
+
+"""Messages used by DKG parties"""
+
+class JMDKGInitSeen(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'hostpubkeyhash', Unicode()),
+ (b'session_id', Unicode()),
+ (b'sig', Unicode()),
+ ]
+
+class JMDKGPMsg1Seen(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'hostpubkeyhash', Unicode()),
+ (b'session_id', Unicode()),
+ (b'sig', Unicode()),
+ (b'pmsg1', Unicode()),
+ ]
+
+class JMDKGPMsg2Seen(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'session_id', Unicode()),
+ (b'pmsg2', Unicode()),
+ ]
+
+class JMDKGFinalizedSeen(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'session_id', Unicode()),
+ ]
+
+class JMDKGCMsg1Seen(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'session_id', Unicode()),
+ (b'cmsg1', Unicode()),
+ ]
+
+class JMDKGCMsg2Seen(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'session_id', Unicode()),
+ (b'cmsg2', Unicode()),
+ (b'ext_recovery', Unicode()),
+ ]
+
+
+"""Messages used by FROST parties"""
+
+class JMFROSTInitSeen(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'hostpubkeyhash', Unicode()),
+ (b'session_id', Unicode()),
+ (b'sig', Unicode()),
+ ]
+
+class JMFROSTRound1Seen(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'hostpubkeyhash', Unicode()),
+ (b'session_id', Unicode()),
+ (b'sig', Unicode()),
+ (b'pub_nonce', Unicode()),
+ ]
+
+class JMFROSTAgg1Seen(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'session_id', Unicode()),
+ (b'nonce_agg', Unicode()),
+ (b'dkg_session_id', Unicode()),
+ (b'ids', Unicode()),
+ (b'msg', Unicode()),
+ ]
+
+class JMFROSTRound2Seen(JMCommand):
+ arguments = [
+ (b'nick', Unicode()),
+ (b'session_id', Unicode()),
+ (b'partial_sig', Unicode()),
+ ]
+
+
""" TAKER-specific commands
"""
diff --git a/src/jmbase/support.py b/src/jmbase/support.py
index 4124694..ebcb2f6 100644
--- a/src/jmbase/support.py
+++ b/src/jmbase/support.py
@@ -23,6 +23,14 @@ EXIT_SUCCESS = 0
EXIT_FAILURE = 1
EXIT_ARGERROR = 2
+
+def twisted_sys_exit(status):
+ from twisted.internet import reactor
+ if reactor.running:
+ reactor.stop()
+ sys.exit(status)
+
+
# optparse munges description paragraphs. We sometimes
# don't want that.
class IndentedHelpFormatterWithNL(IndentedHelpFormatter):
@@ -224,7 +232,7 @@ def lookup_appdata_folder(appname):
appname) + '/'
else:
jmprint("Could not find home folder")
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
elif 'win32' in sys.platform or 'win64' in sys.platform:
data_folder = path.join(environ['APPDATA'], appname) + '\\'
@@ -237,7 +245,7 @@ def get_jm_version_str():
def print_jm_version(option, opt_str, value, parser):
print(get_jm_version_str())
- sys.exit(EXIT_SUCCESS)
+ twisted_sys_exit(EXIT_SUCCESS)
# helper functions for conversions of format between over-the-wire JM
# and internal. See details in hexbin() docstring.
@@ -303,6 +311,22 @@ def hexbin(func):
return func_wrapper
+
+def async_hexbin(func):
+ @wraps(func)
+ async def func_wrapper(inst, *args, **kwargs):
+ newargs = []
+ for arg in args:
+ if isinstance(arg, (list, tuple)):
+ newargs.append(listchanger(arg))
+ elif isinstance(arg, dict):
+ newargs.append(dictchanger(arg))
+ else:
+ newargs.append(_convert(arg))
+ return await func(inst, *newargs, **kwargs)
+
+ return func_wrapper
+
def wrapped_urlparse(url):
""" This wrapper is unfortunately necessary as there appears
to be a bug in the urlparse handling of *.onion strings:
diff --git a/src/jmbitcoin/secp256k1_deterministic.py b/src/jmbitcoin/secp256k1_deterministic.py
index eb4fa3d..34545d4 100644
--- a/src/jmbitcoin/secp256k1_deterministic.py
+++ b/src/jmbitcoin/secp256k1_deterministic.py
@@ -97,6 +97,10 @@ def bip32_master_key(seed, vbytes=MAINNET_PRIVATE):
return bip32_serialize((vbytes, 0, b'\x00' * 4, 0, I[32:], I[:32] + b'\x01'
))
+def hostseckey_from_entropy(seed):
+ return hmac.new("Bitcoin seed".encode("utf-8"),
+ seed, hashlib.sha256).digest() + b'\x01'
+
def bip32_extract_key(data):
return bip32_deserialize(data)[-1]
diff --git a/src/jmbitcoin/secp256k1_transaction.py b/src/jmbitcoin/secp256k1_transaction.py
index 43ce4b7..9af9752 100644
--- a/src/jmbitcoin/secp256k1_transaction.py
+++ b/src/jmbitcoin/secp256k1_transaction.py
@@ -219,12 +219,11 @@ def pubkey_to_p2sh_p2wpkh_script(pub: bytes) -> CScript:
return pubkey_to_p2wpkh_script(pub).to_p2sh_scriptPubKey()
def pubkey_to_p2tr_script(pub: bytes) -> CScript:
- """
- Given a pubkey in bytes (compressed), return a CScript
- representing the corresponding pay-to-taproot scriptPubKey.
- """
return P2TRCoinAddress.from_pubkey(pub).to_scriptPubKey()
+def output_pubkey_to_p2tr_script(pub: bytes) -> CScript:
+ return P2TRCoinAddress.from_output_pubkey(pub).to_scriptPubKey()
+
def redeem_script_to_p2wsh_script(redeem_script: Union[bytes, CScript]) -> CScript:
""" Given redeem script of type CScript (or bytes)
returns the corresponding segwit v0 scriptPubKey as
@@ -375,6 +374,43 @@ def sign(
return sig, "signing succeeded"
+
+def add_frost_sig(
+ tx: CMutableTransaction,
+ i: int,
+ pub: bytes,
+ sig: bytes,
+ amount: Optional[int] = None,
+ *,
+ spent_outputs: Optional[List[CTxOut]] = None
+) -> Tuple[Optional[bytes], str]:
+ # script verification flags
+ flags = set([SCRIPT_VERIFY_STRICTENC])
+
+ def return_err(e):
+ return None, "Error in signing: " + repr(e)
+
+ assert isinstance(tx, CMutableTransaction)
+
+ flags.add(SCRIPT_VERIFY_P2SH)
+ flags.add(SCRIPT_VERIFY_WITNESS)
+
+ assert spent_outputs
+ witness = [sig]
+ ctxwitness = CTxInWitness(CScriptWitness(witness))
+ tx.wit.vtxinwit[i] = ctxwitness
+ try:
+ input_scriptPubKey = pubkey_to_p2tr_script(pub)
+ VerifyScript(
+ tx.vin[i].scriptSig, input_scriptPubKey, tx, i,
+ flags=flags, amount=amount,
+ witness=tx.wit.vtxinwit[i].scriptWitness,
+ spent_outputs=spent_outputs)
+ except ValidationError as e:
+ return return_err(e)
+ return sig, "signing succeeded"
+
+
def mktx(ins: List[Tuple[bytes, int]],
outs: List[dict],
version: int = 1,
diff --git a/src/jmclient/__init__.py b/src/jmclient/__init__.py
index f9c9080..7e868a6 100644
--- a/src/jmclient/__init__.py
+++ b/src/jmclient/__init__.py
@@ -1,5 +1,15 @@
+# -*- coding: utf-8 -*-
+import asyncio
 import logging
+import sys
+
+if 'twisted.internet.reactor' not in sys.modules:
+    from twisted.internet import asyncioreactor
+    asyncio_loop = asyncio.new_event_loop()
+    asyncio.set_event_loop(asyncio_loop)
+    asyncio_loop.set_debug(False)
+    asyncioreactor.install(asyncio_loop)
from .support import (calc_cj_fee, choose_sweep_orders, choose_orders,
cheapest_order_choose, weighted_order_choose,
@@ -17,9 +26,10 @@ from .wallet import (Mnemonic, estimate_tx_fee, WalletError, BaseWallet, ImportW
SegwitWallet, SegwitLegacyWallet, FidelityBondMixin,
FidelityBondWatchonlyWallet, SegwitWalletFidelityBonds,
UTXOManager, WALLET_IMPLEMENTATIONS, compute_tx_locktime,
- UnknownAddressForLabel, TaprootWallet)
+ UnknownAddressForLabel, TaprootWallet, FrostWallet)
from .storage import (Argon2Hash, Storage, StorageError, RetryableStorageError,
- StoragePasswordError, VolatileStorage)
+ StoragePasswordError, VolatileStorage,
+ DKGStorage, DKGRecoveryStorage)
from .cryptoengine import (BTCEngine, BTC_P2PKH, BTC_P2SH_P2WPKH, BTC_P2WPKH, EngineError,
TYPE_P2PKH, TYPE_P2SH_P2WPKH, TYPE_P2WPKH, detect_script_type,
is_extended_public_key)
@@ -28,7 +38,7 @@ from .configure import (load_test_config, process_shutdown,
validate_address, is_burn_destination, get_mchannels,
get_blockchain_interface_instance, set_config, is_segwit_mode,
is_taproot_mode, is_native_segwit_mode, JMPluginService, get_interest_rate,
- get_bondless_makers_allowance, check_and_start_tor)
+ get_bondless_makers_allowance, check_and_start_tor, is_frost_mode)
from .blockchaininterface import (BlockchainInterface,
RegtestBitcoinCoreInterface, BitcoinCoreInterface)
from .snicker_receiver import SNICKERError, SNICKERReceiver
@@ -63,8 +73,8 @@ from .wallet_utils import (
wallet_change_passphrase, wallet_signmessage)
from .wallet_service import WalletService
from .maker import Maker
-from .yieldgenerator import YieldGenerator, YieldGeneratorBasic, ygmain, \
- YieldGeneratorService
+from .yieldgenerator import (YieldGenerator, YieldGeneratorBasic, ygmain,
+ YieldGeneratorService)
from .snicker_receiver import SNICKERError, SNICKERReceiver, SNICKERReceiverService
from .payjoin import (parse_payjoin_setup, send_payjoin,
JMBIP78ReceiverManager)
@@ -72,6 +82,8 @@ from .websocketserver import JmwalletdWebSocketServerFactory, \
JmwalletdWebSocketServerProtocol
from .wallet_rpc import JMWalletDaemon
from .bond_calc import get_bond_values
+from .frost_clients import FROSTClient
+from .frost_ipc import FrostIPCClient
# Set default logging handler to avoid "No handler found" warnings.
try:
diff --git a/src/jmclient/blockchaininterface.py b/src/jmclient/blockchaininterface.py
index d050738..fcefe7e 100644
--- a/src/jmclient/blockchaininterface.py
+++ b/src/jmclient/blockchaininterface.py
@@ -11,7 +11,7 @@ from twisted.internet import reactor, task
import jmbitcoin as btc
from jmbase import bintohex, hextobin, stop_reactor
-from jmbase.support import get_log, jmprint, EXIT_FAILURE
+from jmbase.support import get_log, jmprint, EXIT_FAILURE, twisted_sys_exit
from jmclient.configure import jm_single
from jmclient.jsonrpc import JsonRpc, JsonRpcConnectionError, JsonRpcError
@@ -490,7 +490,7 @@ class BitcoinCoreInterface(BlockchainInterface):
restart_cb(fatal_msg)
else:
jmprint(fatal_msg, "important")
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
def import_addresses_if_needed(self, addresses: Set[str], wallet_name: str) -> bool:
if wallet_name in self._rpc('listlabels', []):
@@ -533,12 +533,12 @@ class BitcoinCoreInterface(BlockchainInterface):
restart_cb(fatal_msg)
else:
jmprint(fatal_msg, "important")
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
def import_descriptors_if_needed(self, descriptors: Set[str], wallet_name: str) -> bool:
if wallet_name in self._rpc('listlabels', []):
- imported_descriptors = set(self._rpc('getaddressesbylabel',
- [wallet_name]).keys())
+ list_desc = self._rpc('listdescriptors', []).get('descriptors', [])
+ imported_descriptors = set([x['desc'] for x in list_desc])
else:
imported_descriptors = set()
import_needed = not descriptors.issubset(imported_descriptors)
diff --git a/src/jmclient/client_protocol.py b/src/jmclient/client_protocol.py
index 180932d..f6adf80 100644
--- a/src/jmclient/client_protocol.py
+++ b/src/jmclient/client_protocol.py
@@ -1,4 +1,8 @@
#! /usr/bin/env python
+
+import asyncio
+import base64
+import time
from twisted.internet import protocol, reactor, task
from twisted.internet.error import (ConnectionLost, ConnectionAborted,
ConnectionClosed, ConnectionDone)
@@ -7,19 +11,21 @@ try:
from twisted.internet.ssl import ClientContextFactory
except ImportError:
pass
-from jmbase import commands
+from jmbase import commands, jmprint
import binascii
import json
import hashlib
import os
-import sys
from jmbase import (get_log, EXIT_FAILURE, hextobin, bintohex,
- utxo_to_utxostr, bdict_sdict_convert)
+ utxo_to_utxostr, bdict_sdict_convert, twisted_sys_exit)
+from jmclient.maker import Maker
from jmclient import (jm_single, get_mchannels,
RegtestBitcoinCoreInterface,
- SNICKERReceiver, process_shutdown)
+ SNICKERReceiver, process_shutdown, FrostWallet)
import jmbitcoin as btc
+from .frost_clients import DKGClient
+
# module level variable representing the port
# on which the daemon is running.
# note that this var is only set if we are running
@@ -136,8 +142,11 @@ class BIP78ClientProtocol(BaseClientProtocol):
return {"accepted": True}
@commands.BIP78SenderReceiveProposal.responder
- def on_BIP78_SENDER_RECEIVE_PROPOSAL(self, psbt):
- self.success_callback(psbt, self.manager)
+ async def on_BIP78_SENDER_RECEIVE_PROPOSAL(self, psbt):
+        if asyncio.iscoroutinefunction(self.success_callback):
+ await self.success_callback(psbt, self.manager)
+ else:
+ self.success_callback(psbt, self.manager)
return {"accepted": True}
@commands.BIP78SenderReceiveError.responder
@@ -263,8 +272,8 @@ class SNICKERClientProtocol(BaseClientProtocol):
reactor.callLater(0.0, self.process_proposals, proposals)
return {"accepted": True}
- def process_proposals(self, proposals):
- self.client.process_proposals(proposals)
+ async def process_proposals(self, proposals):
+ await self.client.process_proposals(proposals)
if self.oneshot:
process_shutdown()
@@ -386,6 +395,301 @@ class JMClientProtocol(BaseClientProtocol):
self.defaultCallbacks(d)
return {'accepted': True}
+
+ """DKG specifics
+ """
+ async def dkg_gen(self):
+        jlog.debug('Coordinator calls dkg_gen')
+ client = self.factory.client
+ md_type_idx = None
+ session_id = None
+ session = None
+
+ while True:
+ if md_type_idx is None:
+ md_type_idx = await client.dkg_gen()
+ if md_type_idx is None:
+ jlog.debug('finished dkg_gen execution')
+ break
+
+ if session_id is None:
+ session_id, _, session = self.dkg_init(*md_type_idx)
+ if session_id is None:
+                    jlog.warning('could not get session_id from dkg_init')
+ await asyncio.sleep(5)
+ continue
+
+ pub = await client.wait_on_dkg_output(session_id)
+ if not pub:
+ session_id = None
+ session = None
+ continue
+
+ if session.dkg_output:
+ md_type_idx = None
+ session_id = None
+ session = None
+ client.dkg_gen_list.pop(0)
+ continue
+
+ def dkg_init(self, mixdepth, address_type, index):
+ jlog.debug(f'Coordinator call dkg_init '
+ f'({mixdepth}, {address_type}, {index})')
+ client = self.factory.client
+ hostpubkeyhash, session_id, sig = client.dkg_init(mixdepth,
+ address_type, index)
+ coordinator = client.dkg_coordinators.get(session_id)
+ session = client.dkg_sessions.get(session_id)
+ if session_id and session and coordinator:
+ d = self.callRemote(commands.JMDKGInit,
+ hostpubkeyhash=hostpubkeyhash,
+ session_id=bintohex(session_id),
+ sig=sig)
+ self.defaultCallbacks(d)
+ session.dkg_init_sec = time.time()
+ return session_id, coordinator, session
+ return None, None, None
+
+ @commands.JMDKGInitSeen.responder
+ def on_JM_DKG_INIT_SEEN(self, nick, hostpubkeyhash, session_id, sig):
+ wallet = self.client.wallet_service.wallet
+ if not isinstance(wallet, FrostWallet) or wallet._dkg is None:
+ return {'accepted': True}
+
+ client = self.factory.client
+ session_id = hextobin(session_id)
+ nick, hostpubkeyhash, session_id, sig, pmsg1 = client.on_dkg_init(
+ nick, hostpubkeyhash, session_id, sig)
+ if pmsg1:
+ d = self.callRemote(commands.JMDKGPMsg1,
+ nick=nick, hostpubkeyhash=hostpubkeyhash,
+ session_id=session_id, sig=sig,
+ pmsg1=base64.b64encode(pmsg1).decode('ascii'))
+ self.defaultCallbacks(d)
+ return {'accepted': True}
+
+ @commands.JMDKGPMsg1Seen.responder
+ def on_JM_DKG_PMSG1_SEEN(self, nick, hostpubkeyhash,
+ session_id, sig, pmsg1):
+ wallet = self.client.wallet_service.wallet
+ if not isinstance(wallet, FrostWallet) or wallet._dkg is None:
+ return {'accepted': True}
+
+ client = self.factory.client
+ bin_session_id = hextobin(session_id)
+ pmsg1 = client.deserialize_pmsg1(base64.b64decode(pmsg1))
+ ready_nicks, cmsg1 = client.on_dkg_pmsg1(nick, hostpubkeyhash,
+ bin_session_id, sig, pmsg1)
+ if ready_nicks and cmsg1:
+ for nick in ready_nicks:
+ self.dkg_cmsg1(nick, session_id, cmsg1)
+ return {'accepted': True}
+
+ def dkg_cmsg1(self, nick, session_id, cmsg1):
+ d = self.callRemote(commands.JMDKGCMsg1,
+ nick=nick, session_id=session_id,
+ cmsg1=base64.b64encode(cmsg1).decode('ascii'))
+ self.defaultCallbacks(d)
+
+ @commands.JMDKGPMsg2Seen.responder
+ def on_JM_DKG_PMSG2_SEEN(self, nick, session_id, pmsg2):
+ wallet = self.client.wallet_service.wallet
+ if not isinstance(wallet, FrostWallet) or wallet._dkg is None:
+ return {'accepted': True}
+
+ client = self.factory.client
+ bin_session_id = hextobin(session_id)
+ pmsg2 = client.deserialize_pmsg2(base64.b64decode(pmsg2))
+ ready_nicks, cmsg2, ext_recovery = client.on_dkg_pmsg2(
+ nick, bin_session_id, pmsg2)
+ if ready_nicks and cmsg2 and ext_recovery:
+ for nick in ready_nicks:
+ self.dkg_cmsg2(nick, session_id, cmsg2, ext_recovery)
+ return {'accepted': True}
+
+ def dkg_cmsg2(self, nick, session_id, cmsg2, ext_recovery):
+ d = self.callRemote(commands.JMDKGCMsg2,
+ nick=nick, session_id=session_id,
+ cmsg2=base64.b64encode(cmsg2).decode('ascii'),
+ ext_recovery=ext_recovery.decode('ascii'))
+ self.defaultCallbacks(d)
+
+ @commands.JMDKGFinalizedSeen.responder
+ def on_JM_DKG_FINALIZED_SEEN(self, nick, session_id):
+ wallet = self.client.wallet_service.wallet
+ if not isinstance(wallet, FrostWallet) or wallet._dkg is None:
+ return {'accepted': True}
+
+ client = self.factory.client
+ bin_session_id = hextobin(session_id)
+        jlog.debug('Coordinator got dkg_finalized')
+ client.on_dkg_finalized(nick, bin_session_id)
+ return {'accepted': True}
+
+ @commands.JMDKGCMsg1Seen.responder
+ def on_JM_DKG_CMSG1_SEEN(self, nick, session_id, cmsg1):
+ wallet = self.client.wallet_service.wallet
+ if not isinstance(wallet, FrostWallet) or wallet._dkg is None:
+ return {'accepted': True}
+
+ client = self.factory.client
+ bin_session_id = hextobin(session_id)
+ session = client.dkg_sessions.get(bin_session_id)
+ if not session:
+ jlog.error(f'on_JM_DKG_CMSG1_SEEN: session {session_id} not found')
+ return {'accepted': True}
+        if session.coord_nick == nick:
+ cmsg1 = client.deserialize_cmsg1(base64.b64decode(cmsg1))
+ pmsg2 = client.party_step2(bin_session_id, cmsg1)
+ if pmsg2:
+ pmsg2b64 = base64.b64encode(pmsg2).decode('ascii')
+ d = self.callRemote(commands.JMDKGPMsg2,
+ nick=nick, session_id=session_id,
+ pmsg2=pmsg2b64)
+ self.defaultCallbacks(d)
+ else:
+ jlog.error(f'on_JM_DKG_CMSG1_SEEN: not coordinator nick {nick}')
+ return {'accepted': True}
+
+ @commands.JMDKGCMsg2Seen.responder
+ def on_JM_DKG_CMSG2_SEEN(self, nick, session_id, cmsg2, ext_recovery):
+ wallet = self.client.wallet_service.wallet
+ if not isinstance(wallet, FrostWallet) or wallet._dkg is None:
+ return {'accepted': True}
+
+ client = self.factory.client
+ bin_session_id = hextobin(session_id)
+ session = client.dkg_sessions.get(bin_session_id)
+ if not session:
+ jlog.error(f'on_JM_DKG_CMSG2_SEEN: session {session_id} not found')
+ return {'accepted': True}
+        if session.coord_nick == nick:
+ cmsg2 = client.deserialize_cmsg2(base64.b64decode(cmsg2))
+ finalized = client.finalize(bin_session_id, cmsg2,
+ ext_recovery.encode('ascii'))
+ if finalized:
+ d = self.callRemote(commands.JMDKGFinalized,
+ nick=nick, session_id=session_id)
+ self.defaultCallbacks(d)
+ else:
+ jlog.error(f'on_JM_DKG_CMSG2_SEEN: not coordinator nick {nick}')
+ return {'accepted': True}
+
+ """FROST specifics
+ """
+ def frost_init(self, dkg_session_id, msg_bytes):
+        jlog.debug('Coordinator calls frost_init')
+ client = self.factory.client
+ hostpubkeyhash, session_id, sig = client.frost_init(
+ dkg_session_id, msg_bytes)
+ coordinator = client.frost_coordinators.get(session_id)
+ session = client.frost_sessions.get(session_id)
+ if session_id and session and coordinator:
+ d = self.callRemote(commands.JMFROSTInit,
+ hostpubkeyhash=hostpubkeyhash,
+ session_id=bintohex(session_id),
+ sig=sig)
+ self.defaultCallbacks(d)
+ coordinator.frost_init_sec = time.time()
+ return session_id, coordinator, session
+ return None, None, None
+
+ @commands.JMFROSTInitSeen.responder
+ def on_JM_FROST_INIT_SEEN(self, nick, hostpubkeyhash, session_id, sig):
+ wallet = self.client.wallet_service.wallet
+ if not isinstance(wallet, FrostWallet) or wallet._dkg is None:
+ return {'accepted': True}
+
+ client = self.factory.client
+ session_id = hextobin(session_id)
+ nick, hostpubkeyhash, session_id, sig, pub_nonce = \
+ client.on_frost_init(nick, hostpubkeyhash, session_id, sig)
+ if pub_nonce:
+ pub_nonce_b64 = base64.b64encode(pub_nonce).decode('ascii')
+ d = self.callRemote(commands.JMFROSTRound1,
+ nick=nick, hostpubkeyhash=hostpubkeyhash,
+ session_id=session_id, sig=sig,
+ pub_nonce=pub_nonce_b64)
+ self.defaultCallbacks(d)
+ return {'accepted': True}
+
+ @commands.JMFROSTRound1Seen.responder
+ def on_JM_FROST_ROUND1_SEEN(self, nick, hostpubkeyhash,
+ session_id, sig, pub_nonce):
+ wallet = self.client.wallet_service.wallet
+ if not isinstance(wallet, FrostWallet) or wallet._dkg is None:
+ return {'accepted': True}
+
+ client = self.factory.client
+ bin_session_id = hextobin(session_id)
+ pub_nonce = base64.b64decode(pub_nonce)
+ ready_nicks, nonce_agg, dkg_session_id, ids, msg = \
+ client.on_frost_round1(nick, hostpubkeyhash, bin_session_id,
+ sig, pub_nonce)
+ if ready_nicks and nonce_agg:
+ for nick in ready_nicks:
+ self.frost_agg1(nick, session_id, nonce_agg,
+ dkg_session_id, ids, msg)
+ return {'accepted': True}
+
+ def frost_agg1(self, nick, session_id,
+ nonce_agg, dkg_session_id, ids, msg):
+ nonce_agg = base64.b64encode(nonce_agg).decode('ascii')
+ dkg_session_id = base64.b64encode(dkg_session_id).decode('ascii')
+        ids = ','.join(str(i) for i in ids)
+ msg = base64.b64encode(msg).decode('ascii')
+ d = self.callRemote(commands.JMFROSTAgg1,
+ nick=nick, session_id=session_id,
+ nonce_agg=nonce_agg, dkg_session_id=dkg_session_id,
+ ids=ids, msg=msg)
+ self.defaultCallbacks(d)
+
+ @commands.JMFROSTAgg1Seen.responder
+ def on_JM_FROST_AGG1_SEEN(self, nick, session_id,
+ nonce_agg, dkg_session_id, ids, msg):
+ wallet = self.client.wallet_service.wallet
+ if not isinstance(wallet, FrostWallet) or wallet._dkg is None:
+ return {'accepted': True}
+
+ client = self.factory.client
+ bin_session_id = hextobin(session_id)
+ session = client.frost_sessions.get(bin_session_id)
+ if not session:
+            jlog.error(f'on_JM_FROST_AGG1_SEEN: session {session_id} not found')
+ return {'accepted': True}
+        if session.coord_nick == nick:
+ nonce_agg = base64.b64decode(nonce_agg)
+ dkg_session_id = base64.b64decode(dkg_session_id)
+ ids = [int(i) for i in ids.split(',')]
+ msg = base64.b64decode(msg)
+
+ partial_sig = client.frost_round2(
+ bin_session_id, nonce_agg, dkg_session_id, ids, msg)
+ if partial_sig:
+ partial_sig = base64.b64encode(partial_sig).decode('ascii')
+ d = self.callRemote(commands.JMFROSTRound2,
+ nick=nick, session_id=session_id,
+ partial_sig=partial_sig)
+ self.defaultCallbacks(d)
+ else:
+            jlog.error(f'on_JM_FROST_AGG1_SEEN: '
+                       f'nick {nick} is not the coordinator')
+ return {'accepted': True}
+
+ @commands.JMFROSTRound2Seen.responder
+ def on_JM_FROST_ROUND2_SEEN(self, nick, session_id, partial_sig):
+ wallet = self.client.wallet_service.wallet
+ if not isinstance(wallet, FrostWallet) or wallet._dkg is None:
+ return {'accepted': True}
+
+ client = self.factory.client
+ bin_session_id = hextobin(session_id)
+ partial_sig = base64.b64decode(partial_sig)
+ sig = client.on_frost_round2(nick, bin_session_id, partial_sig)
+ if sig:
+            jlog.debug(f'Successfully obtained signature {sig.hex()[:8]}...')
+ return {'accepted': True}
+
+
class JMMakerClientProtocol(JMClientProtocol):
def __init__(self, factory, maker, nick_priv=None):
self.factory = factory
@@ -395,11 +699,14 @@ class JMMakerClientProtocol(JMClientProtocol):
@commands.JMUp.responder
def on_JM_UP(self):
- #wait until ready locally to submit offers (can be delayed
- #if wallet sync is slow).
- self.offers_ready_loop_counter = 0
- self.offers_ready_loop = task.LoopingCall(self.submitOffers)
- self.offers_ready_loop.start(2.0)
+ if isinstance(self.client, DKGClient):
+ self.client.on_jm_up()
+ if isinstance(self.client, Maker):
+ # wait until ready locally to submit offers (can be delayed
+ # if wallet sync is slow).
+ self.offers_ready_loop_counter = 0
+ self.offers_ready_loop = task.LoopingCall(self.submitOffers)
+ self.offers_ready_loop.start(2.0)
return {'accepted': True}
def submitOffers(self):
@@ -461,10 +768,10 @@ class JMMakerClientProtocol(JMClientProtocol):
return {"accepted": True}
@commands.JMAuthReceived.responder
- def on_JM_AUTH_RECEIVED(self, nick, offer, commitment, revelation, amount,
+ async def on_JM_AUTH_RECEIVED(self, nick, offer, commitment, revelation, amount,
kphex):
- retval = self.client.on_auth_received(nick, offer,
- commitment, revelation, amount, kphex)
+ retval = await self.client.on_auth_received(
+ nick, offer, commitment, revelation, amount, kphex)
if not retval[0]:
jlog.info("Maker refuses to continue on receiving auth.")
else:
@@ -488,8 +795,8 @@ class JMMakerClientProtocol(JMClientProtocol):
return {"accepted": True}
@commands.JMTXReceived.responder
- def on_JM_TX_RECEIVED(self, nick, tx, offer):
- retval = self.client.on_tx_received(nick, tx, offer)
+ async def on_JM_TX_RECEIVED(self, nick, tx, offer):
+ retval = await self.client.on_tx_received(nick, tx, offer)
if not retval[0]:
jlog.info("Maker refuses to continue on receipt of tx")
else:
@@ -621,7 +928,7 @@ class JMTakerClientProtocol(JMClientProtocol):
blacklist_location=jm_single().commitment_list_location)
self.defaultCallbacks(d)
- def stallMonitor(self, schedule_index):
+ async def stallMonitor(self, schedule_index):
"""Diagnoses whether long wait is due to any kind of failure;
if so, calls the taker on_finished_callback with a failure
flag so that the transaction can be re-tried or abandoned, as desired.
@@ -645,7 +952,10 @@ class JMTakerClientProtocol(JMClientProtocol):
if not self.client.txid:
#txid is set on pushing; if it's not there, we have failed.
jlog.info("Stall detected. Retrying transaction if possible ...")
- self.client.on_finished_callback(False, True, 0.0)
+ finished_cb_res = self.client.on_finished_callback(
+ False, True, 0.0)
+                    if asyncio.iscoroutine(finished_cb_res):
+ await finished_cb_res
else:
#This shouldn't really happen; if the tx confirmed,
#the finished callback should already be called.
@@ -670,7 +980,7 @@ class JMTakerClientProtocol(JMClientProtocol):
return {'accepted': True}
@commands.JMFillResponse.responder
- def on_JM_FILL_RESPONSE(self, success, ioauth_data):
+ async def on_JM_FILL_RESPONSE(self, success, ioauth_data):
"""Receives the entire set of phase 1 data (principally utxos)
from the counterparties and passes through to the Taker for
tx construction. If there were sufficient makers, data is passed
@@ -689,14 +999,17 @@ class JMTakerClientProtocol(JMClientProtocol):
return {'accepted': True}
else:
jlog.info("Makers responded with: " + str(ioauth_data))
- retval = self.client.receive_utxos(ioauth_data)
+ retval = await self.client.receive_utxos(ioauth_data)
if not retval[0]:
jlog.info("Taker is not continuing, phase 2 abandoned.")
jlog.info("Reason: " + str(retval[1]))
if len(self.client.schedule) == 1:
# see comment for the same invocation in on_JM_OFFERS;
# the logic here is the same.
- self.client.on_finished_callback(False, False, 0.0)
+ finished_cb_res = self.client.on_finished_callback(
+ False, False, 0.0)
+                    if asyncio.iscoroutine(finished_cb_res):
+ await finished_cb_res
return {'accepted': False}
else:
nick_list, tx = retval[1:]
@@ -704,12 +1017,13 @@ class JMTakerClientProtocol(JMClientProtocol):
return {'accepted': True}
@commands.JMOffers.responder
- def on_JM_OFFERS(self, orderbook, fidelitybonds):
+ async def on_JM_OFFERS(self, orderbook, fidelitybonds):
self.orderbook = json.loads(orderbook)
fidelity_bonds_list = json.loads(fidelitybonds)
#Removed for now, as judged too large, even for DEBUG:
#jlog.debug("Got the orderbook: " + str(self.orderbook))
- retval = self.client.initialize(self.orderbook, fidelity_bonds_list)
+ retval = await self.client.initialize(
+ self.orderbook, fidelity_bonds_list)
#format of retval is:
#True, self.cjamount, commitment, revelation, self.filtered_orderbook)
if not retval[0]:
@@ -718,12 +1032,18 @@ class JMTakerClientProtocol(JMClientProtocol):
#In single sendpayments, allow immediate quit.
#This could be an optional feature also for multi-entry schedules,
#but is not the functionality desired in general (tumbler).
- self.client.on_finished_callback(False, False, 0.0)
+ finished_cb_res = self.client.on_finished_callback(
+ False, False, 0.0)
+            if asyncio.iscoroutine(finished_cb_res):
+ await finished_cb_res
return {'accepted': True}
elif retval[0] == "commitment-failure":
#This case occurs if we cannot find any utxos for reasons
#other than age, which is a permanent failure
- self.client.on_finished_callback(False, False, 0.0)
+ finished_cb_res = self.client.on_finished_callback(
+ False, False, 0.0)
+            if asyncio.iscoroutine(finished_cb_res):
+ await finished_cb_res
return {'accepted': True}
amt, cmt, rev, foffers = retval[1:]
d = self.callRemote(commands.JMFill,
@@ -735,8 +1055,8 @@ class JMTakerClientProtocol(JMClientProtocol):
return {'accepted': True}
@commands.JMSigReceived.responder
- def on_JM_SIG_RECEIVED(self, nick, sig):
- retval = self.client.on_sig(nick, sig)
+ async def on_JM_SIG_RECEIVED(self, nick, sig):
+ retval = await self.client.on_sig(nick, sig)
if retval:
nick_to_use, tx = retval
self.push_tx(nick_to_use, tx)
@@ -857,7 +1177,7 @@ def start_reactor(host, port, factory=None, snickerfactory=None,
if p[0] >= (orgp + 100):
jlog.error("Tried 100 ports but cannot "
"listen on any of them. Quitting.")
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
p[0] += 1
return (p[0], serverconn)
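
The `on_finished_callback` invocations in the hunks above call the callback first and await only if the call produced a coroutine, so both plain and `async def` callbacks are supported. A minimal standalone sketch of that pattern (`invoke_maybe_async` and the callbacks are illustrative names, not from the codebase):

```python
import asyncio

async def invoke_maybe_async(callback, *args):
    # Call the callback; if it is an async function, the call returns a
    # coroutine object which must be awaited to actually run.
    result = callback(*args)
    if asyncio.iscoroutine(result):
        result = await result
    return result

def sync_cb(x):
    return x + 1

async def async_cb(x):
    return x + 2

print(asyncio.run(invoke_maybe_async(sync_cb, 1)))   # 2
print(asyncio.run(invoke_maybe_async(async_cb, 1)))  # 3
```

Note that the check must be on the call result, not the callback object itself: `asyncio.iscoroutine` is always `False` for a function, even an `async def` one.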
diff --git a/src/jmclient/commitment_utils.py b/src/jmclient/commitment_utils.py
index d2e38dc..fcfd10d 100644
--- a/src/jmclient/commitment_utils.py
+++ b/src/jmclient/commitment_utils.py
@@ -1,12 +1,13 @@
import sys
-from jmbase import jmprint, utxostr_to_utxo, utxo_to_utxostr, EXIT_FAILURE
+from jmbase import (jmprint, utxostr_to_utxo, utxo_to_utxostr, EXIT_FAILURE,
+ twisted_sys_exit)
from jmclient import jm_single, BTCEngine, BTC_P2PKH, BTC_P2SH_P2WPKH, BTC_P2WPKH
def quit(parser, errmsg): #pragma: no cover
parser.error(errmsg)
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
def get_utxo_info(upriv, utxo_binary=False):
"""Verify that the input string parses correctly as (utxo, priv)
@@ -52,7 +53,7 @@ def validate_utxo_data(utxo_datas, retrieve=False, utxo_address_type="p2wpkh"):
success, utxostr = utxo_to_utxostr(u)
if not success:
jmprint("Invalid utxo format: " + str(u), "error")
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
jmprint('validating this utxo: ' + utxostr, "info")
# as noted in `ImportWalletMixin` code comments, there is not
# yet a functional auto-detection of key type from WIF, hence
diff --git a/src/jmclient/configure.py b/src/jmclient/configure.py
index 7ef0fb6..18370ef 100644
--- a/src/jmclient/configure.py
+++ b/src/jmclient/configure.py
@@ -13,7 +13,8 @@ from typing import Any, List, Optional, Tuple
import jmbitcoin as btc
from jmbase.support import (get_log, joinmarket_alert, core_alert, debug_silence,
set_logging_level, jmprint, set_logging_color,
- JM_APP_NAME, lookup_appdata_folder, EXIT_FAILURE)
+ JM_APP_NAME, lookup_appdata_folder, EXIT_FAILURE,
+ twisted_sys_exit)
from jmclient.jsonrpc import JsonRpc
from jmclient.podle import set_commitment_file
@@ -223,6 +224,12 @@ confirm_timeout_hours = 6
# Only set to false for old wallets, Joinmarket is now segwit only.
segwit = true
+# Use Taproot P2TR SegWit wallet
+#taproot = true
+
+# Use FROST P2TR SegWit wallet
+#frost = true
+
# Use native segwit (bech32) wallet. If set to false, p2sh-p2wkh
# will be used when generating the addresses for this wallet.
# Notes: 1. The default joinmarket pit is native segwit.
@@ -701,7 +708,7 @@ def load_program_config(config_path: str = "", bs: Optional[str] = None,
except UnicodeDecodeError:
jmprint("Error loading `joinmarket.cfg`, invalid file format.",
"info")
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
# Hack required for bitcoin-rpc-no-history and probably others
# (historicaly electrum); must be able to enforce a different blockchain
@@ -714,7 +721,7 @@ def load_program_config(config_path: str = "", bs: Optional[str] = None,
configfile.write(defaultconfig)
jmprint("Created a new `joinmarket.cfg`. Please review and adopt the "
"settings and restart joinmarket.", "info")
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
loglevel = global_singleton.config.get("LOGGING", "console_log_level")
try:
@@ -953,6 +960,12 @@ def update_persist_config(section: str, name: str, value: Any) -> bool:
f.writelines([x.encode("utf-8") for x in newlines])
return True
+def is_frost_mode() -> bool:
+ c = jm_single().config
+ if not c.has_option('POLICY', 'frost'):
+ return False
+ return c.get('POLICY', 'frost') != 'false'
+
def is_taproot_mode() -> bool:
c = jm_single().config
if not c.has_option('POLICY', 'taproot'):
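
`is_frost_mode` (like the existing `is_taproot_mode`) treats a missing option as disabled and any value other than the literal string `'false'` as enabled, which is why the shipped default is a commented-out `#frost = true`. A self-contained sketch of that lookup with `configparser` (hypothetical config text, not a real joinmarket.cfg):

```python
from configparser import ConfigParser

def frost_enabled(config: ConfigParser) -> bool:
    # Absent option means disabled; any value except 'false' enables it.
    if not config.has_option('POLICY', 'frost'):
        return False
    return config.get('POLICY', 'frost') != 'false'

cfg = ConfigParser()
cfg.read_string("[POLICY]\nsegwit = true\n")
print(frost_enabled(cfg))  # False: option absent

cfg.read_string("[POLICY]\nfrost = true\n")
print(frost_enabled(cfg))  # True
```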
diff --git a/src/jmclient/cryptoengine.py b/src/jmclient/cryptoengine.py
index d90f2a6..5ce191c 100644
--- a/src/jmclient/cryptoengine.py
+++ b/src/jmclient/cryptoengine.py
@@ -2,6 +2,8 @@
from collections import OrderedDict
import struct
+from bitcointx.core.script import SignatureHashSchnorr
+
import jmbitcoin as btc
from jmbase import bintohex
from .configure import get_network, jm_single
@@ -207,6 +209,10 @@ class BTCEngine(object):
def pubkey_to_script(cls, pubkey):
raise NotImplementedError()
+ @classmethod
+ def output_pubkey_to_script(cls, pubkey):
+ raise NotImplementedError()
+
@classmethod
def privkey_to_address(cls, privkey):
script = cls.key_to_script(privkey)
@@ -237,7 +243,17 @@ class BTCEngine(object):
return script == pscript
@classmethod
- def sign_transaction(cls, tx, index, privkey, amount):
+ def output_pubkey_has_script(cls, pubkey, script):
+ stype = detect_script_type(script)
+ assert stype in ENGINES
+ engine = ENGINES[stype]
+ if engine is None:
+ raise EngineError
+ pscript = engine.output_pubkey_to_script(pubkey)
+ return script == pscript
+
+ @classmethod
+ async def sign_transaction(cls, tx, index, privkey, amount):
raise NotImplementedError()
@staticmethod
@@ -282,7 +298,7 @@ class BTC_P2PKH(BTCEngine):
raise EngineError("Script code does not apply to legacy wallets")
@classmethod
- def sign_transaction(cls, tx, index, privkey, *args, **kwargs):
+ async def sign_transaction(cls, tx, index, privkey, *args, **kwargs):
hashcode = kwargs.get('hashcode') or btc.SIGHASH_ALL
return btc.sign(tx, index, privkey,
hashcode=hashcode, amount=None, native=False)
@@ -309,7 +325,7 @@ class BTC_P2SH_P2WPKH(BTCEngine):
return btc.pubkey_to_p2pkh_script(pubkey, require_compressed=True)
@classmethod
- def sign_transaction(cls, tx, index, privkey, amount,
+ async def sign_transaction(cls, tx, index, privkey, amount,
hashcode=btc.SIGHASH_ALL, **kwargs):
assert amount is not None
a, b = btc.sign(tx, index, privkey,
@@ -346,7 +362,7 @@ class BTC_P2WPKH(BTCEngine):
return btc.pubkey_to_p2pkh_script(pubkey, require_compressed=True)
@classmethod
- def sign_transaction(cls, tx, index, privkey, amount,
+ async def sign_transaction(cls, tx, index, privkey, amount,
hashcode=btc.SIGHASH_ALL, **kwargs):
assert amount is not None
return btc.sign(tx, index, privkey,
@@ -395,7 +411,7 @@ class BTC_Timelocked_P2WSH(BTCEngine):
return btc.bin_to_b58check(priv, cls.WIF_PREFIX)
@classmethod
- def sign_transaction(cls, tx, index, privkey_locktime, amount,
+ async def sign_transaction(cls, tx, index, privkey_locktime, amount,
hashcode=btc.SIGHASH_ALL, **kwargs):
assert amount is not None
priv, locktime = privkey_locktime
@@ -428,7 +444,7 @@ class BTC_Watchonly_Timelocked_P2WSH(BTC_Timelocked_P2WSH):
return ""
@classmethod
- def sign_transaction(cls, tx, index, privkey, amount,
+ async def sign_transaction(cls, tx, index, privkey, amount,
hashcode=btc.SIGHASH_ALL, **kwargs):
raise RuntimeError("Cannot spend from watch-only wallets")
@@ -455,7 +471,7 @@ class BTC_Watchonly_P2WPKH(BTC_P2WPKH):
master_key, BTC_Watchonly_Timelocked_P2WSH.get_watchonly_path(path))
@classmethod
- def sign_transaction(cls, tx, index, privkey, amount,
+ async def sign_transaction(cls, tx, index, privkey, amount,
hashcode=btc.SIGHASH_ALL, **kwargs):
raise RuntimeError("Cannot spend from watch-only wallets")
@@ -470,21 +486,42 @@ class BTC_P2TR(BTCEngine):
def pubkey_to_script(cls, pubkey):
return btc.pubkey_to_p2tr_script(pubkey)
+ @classmethod
+ def output_pubkey_to_script(cls, pubkey):
+ return btc.output_pubkey_to_p2tr_script(pubkey)
+
@classmethod
def pubkey_to_script_code(cls, pubkey):
raise NotImplementedError()
@classmethod
- def sign_transaction(cls, tx, index, privkey, amount,
+ async def sign_transaction(cls, tx, index, privkey, amount,
hashcode=btc.SIGHASH_ALL, **kwargs):
assert amount is not None
- assert 'spent_outputs' in kwargs
spent_outputs = kwargs['spent_outputs']
return btc.sign(tx, index, privkey,
hashcode=hashcode, amount=amount, native="p2tr",
spent_outputs=spent_outputs)
+class BTC_P2TR_FROST(BTC_P2TR):
+
+ @classmethod
+ async def sign_transaction(cls, tx, i, path, amount,
+ hashcode=btc.SIGHASH_ALL, wallet=None,
+ **kwargs):
+ spent_outputs = kwargs['spent_outputs']
+ sighash = SignatureHashSchnorr(tx, i, spent_outputs)
+ mixdepth, address_type, index = wallet.get_details(path)
+ sig, pubkey, tweaked_pubkey = await wallet.ipc_client.frost_sign(
+ mixdepth, address_type, index, sighash)
+ if not sig:
+ return None, "FROST signing failed"
+ sig, msg = btc.add_frost_sig(tx, i, pubkey, sig, amount,
+ spent_outputs=spent_outputs)
+ return sig, msg
+
+
ENGINES = {
TYPE_P2PKH: BTC_P2PKH,
TYPE_P2SH_P2WPKH: BTC_P2SH_P2WPKH,
@@ -494,4 +531,5 @@ ENGINES = {
TYPE_WATCHONLY_P2WPKH: BTC_Watchonly_P2WPKH,
TYPE_SEGWIT_WALLET_FIDELITY_BONDS: BTC_P2WPKH,
TYPE_P2TR: BTC_P2TR,
+ TYPE_P2TR_FROST: BTC_P2TR_FROST,
}
diff --git a/src/jmclient/frost_clients.py b/src/jmclient/frost_clients.py
new file mode 100644
index 0000000..81f8de7
--- /dev/null
+++ b/src/jmclient/frost_clients.py
@@ -0,0 +1,1030 @@
+# -*- coding: utf-8 -*-
+
+import asyncio
+import os
+import time
+from hashlib import sha256
+
+from bitcointx.core.key import XOnlyPubKey
+
+import jmbitcoin as btc
+from jmbase import hextobin, get_log
+from jmbitcoin import CCoinKey
+from jmclient.configure import jm_single
+from jmfrost.chilldkg_ref.chilldkg import (
+ params_id,
+ hostpubkey_gen,
+ participant_step1,
+ participant_step2,
+ participant_finalize,
+ participant_investigate,
+ coordinator_step1,
+ coordinator_finalize,
+ coordinator_investigate,
+ SessionParams,
+ DKGOutput,
+ RecoveryData,
+ FaultyParticipantOrCoordinatorError,
+ UnknownFaultyParticipantOrCoordinatorError,
+ ParticipantMsg1,
+ ParticipantMsg2,
+ CoordinatorMsg1,
+ CoordinatorMsg2,
+)
+from jmfrost.chilldkg_ref import encpedpop
+from jmfrost.chilldkg_ref import simplpedpop
+from jmfrost.chilldkg_ref import vss
+from jmfrost.secp256k1proto import secp256k1
+from jmfrost.frost_ref import reference as frost
+from jmfrost.frost_ref.utils.bip340 import schnorr_verify
+
+
+jlog = get_log()
+
+
+def calc_tweak(pubshares, ids_bytes, h=b''):
+ pubkey = frost.derive_group_pubkey(pubshares, ids_bytes)
+ return frost.tagged_hash("TapTweak", pubkey[1:] + h)
+
+
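`calc_tweak` hashes the x-only group pubkey under the `TapTweak` tag. Assuming `frost.tagged_hash` follows the BIP340 tagged-hash construction, it could be sketched with only `hashlib` (the zeroed pubkey below is a placeholder, not a valid key):

```python
from hashlib import sha256

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # BIP340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)
    tag_hash = sha256(tag.encode()).digest()
    return sha256(tag_hash + tag_hash + msg).digest()

# 32-byte x-only pubkey placeholder (illustrative only)
xonly = bytes(32)
tweak = tagged_hash("TapTweak", xonly)
print(tweak.hex())
```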
+def chilldkg_hexlify(data):
+ if isinstance(data, bytes):
+ return data.hex()
+ if isinstance(data, dict):
+ return {k: chilldkg_hexlify(v) for k, v in data.items()}
+ if hasattr(data, "_asdict"): # NamedTuple
+ return chilldkg_hexlify(data._asdict())
+ if isinstance(data, list):
+ return [chilldkg_hexlify(v) for v in data]
+ return data
+
+
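`chilldkg_hexlify` above recursively hex-encodes `bytes` leaves inside dicts, lists, and NamedTuples (detected via `_asdict`), which keeps DKG structures loggable. A quick standalone illustration (`Msg` is an invented NamedTuple):

```python
from typing import NamedTuple

def hexlify_tree(data):
    # Same shape as chilldkg_hexlify: recurse through containers,
    # hex-encode bytes leaves, pass everything else through unchanged.
    if isinstance(data, bytes):
        return data.hex()
    if isinstance(data, dict):
        return {k: hexlify_tree(v) for k, v in data.items()}
    if hasattr(data, "_asdict"):  # NamedTuple
        return hexlify_tree(data._asdict())
    if isinstance(data, list):
        return [hexlify_tree(v) for v in data]
    return data

class Msg(NamedTuple):
    nonce: bytes
    shares: list

print(hexlify_tree(Msg(nonce=b'\x01\x02', shares=[b'\xff', 7])))
# {'nonce': '0102', 'shares': ['ff', 7]}
```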
+def decrypt_ext_recovery(privkey, enc_ext_recovery_base64):
+ return btc.ecies_decrypt(privkey, enc_ext_recovery_base64)
+
+
+def serialize_ext_recovery(mixdepth, address_type, index):
+ try:
+ res = b''
+ res += mixdepth.to_bytes(1, 'big')
+ res += address_type.to_bytes(1, 'big')
+ res += index.to_bytes(4, 'big')
+ return res
+ except Exception as e:
+ jlog.error(f'serialize_ext_recovery: serialization '
+ f'failed {repr(e)}')
+
+
+def deserialize_ext_recovery(ext_recovery_bytes):
+ try:
+ b = ext_recovery_bytes
+ i = 0
+ mixdepth = int.from_bytes(b[i:i+1], 'big')
+ i += 1
+ address_type = int.from_bytes(b[i:i+1], 'big')
+ i += 1
+ index = int.from_bytes(b[i:i+4], 'big')
+ i += 4
+ assert b[i:] == b''
+ return mixdepth, address_type, index
+ except Exception as e:
+ jlog.error(f'deserialize_ext_recovery: deserialization '
+ f'failed {repr(e)}')
+
+
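The `ext_recovery` record serialized above is a fixed 6-byte layout: one byte each for `mixdepth` and `address_type`, then a 4-byte big-endian `index`. A round-trip sketch of just that layout (helper names are illustrative; the logging wrappers are omitted):

```python
def pack_ext_recovery(mixdepth: int, address_type: int, index: int) -> bytes:
    # 1 + 1 + 4 = 6 bytes, big-endian, matching (de)serialize_ext_recovery
    return (mixdepth.to_bytes(1, 'big')
            + address_type.to_bytes(1, 'big')
            + index.to_bytes(4, 'big'))

def unpack_ext_recovery(b: bytes):
    assert len(b) == 6
    return b[0], b[1], int.from_bytes(b[2:6], 'big')

blob = pack_ext_recovery(2, 1, 41)
print(blob.hex())                 # 020100000029
print(unpack_ext_recovery(blob))  # (2, 1, 41)
```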
+class DKGCoordinator:
+
+ def __init__(self, *, mixdepth, address_type, index,
+ session_id, hostpubkey):
+ self.mixdepth = mixdepth
+ self.address_type = address_type
+ self.index = index
+ self.session_id = session_id
+ self.hostpubkey = hostpubkey
+ self.parties = dict()
+ self.sessions = dict()
+ self.state = None
+ self.cmsg2 = None
+ self.ext_recovery = None
+
+
+class DKGSession:
+
+ def __init__(self, *, session_id, hostpubkey,
+ coord_nick, coord_hostpubkey):
+ self.session_id = session_id
+ self.hostpubkey = hostpubkey
+ self.coord_nick = coord_nick
+ self.coord_hostpubkey = coord_hostpubkey
+ self.dkg_init_sec = 0
+ self.state1 = None
+ self.state2 = None
+ self.dkg_output = None
+ self.recovery_data = None
+
+
+COORDINATOR = 'coordinator'
+
+
+class DKGClient:
+
+ DKG_WAIT_SEC = 60
+
+ def __init__(self, wallet_service):
+ self.aborted = False
+ self.testflag = False
+ self.offerlist = []
+ self.jm_up_loop = None
+ self.jm_up = False
+ self.dkg_gen_list = []
+ self.current_dkg_gen = None
+
+ self.wallet_service = wallet_service
+ hostpubkeys = jm_single().config.get('FROST', 'hostpubkeys')
+ self.hostpubkeys = [hextobin(p) for p in hostpubkeys.split(',')]
+ self.t = jm_single().config.getint('FROST', 't')
+ self.session_params = SessionParams(self.hostpubkeys, self.t)
+ self.dkg_coordinators = dict()
+ self.dkg_sessions = dict()
+
+ def on_jm_up(self):
+ self.jm_up = True
+
+ def find_pubkey_by_pubkeyhash(self, pubkeyhash):
+ for pubkey in self.hostpubkeys:
+ if pubkeyhash == sha256(pubkey).hexdigest():
+ return pubkey
+
+ async def dkg_gen(self):
+ if self.dkg_gen_list:
+ self.current_dkg_gen = self.dkg_gen_list[0]
+ else:
+ self.current_dkg_gen = None
+ return self.current_dkg_gen
+
+ def dkg_init(self, mixdepth, address_type, index):
+ try:
+ wallet = self.wallet_service.wallet
+ hostseckey = wallet._hostseckey[:32]
+ hostpubkey = hostpubkey_gen(hostseckey)
+ hostpubkeyhash = sha256(hostpubkey).digest()
+ session_id = sha256(os.urandom(32)).digest()
+ coordinator = DKGCoordinator(mixdepth=mixdepth,
+ address_type=address_type,
+ index=index,
+ session_id=session_id,
+ hostpubkey=hostpubkey)
+ md_type_idx = (coordinator.mixdepth,
+ coordinator.address_type,
+ coordinator.index)
+ ext_recovery_bytes = serialize_ext_recovery(*md_type_idx)
+ coordinator.ext_recovery = self.encrypt_ext_recovery(
+ coordinator, ext_recovery_bytes)
+ self.dkg_coordinators[session_id] = coordinator
+ session = DKGSession(session_id=session_id,
+ hostpubkey=hostpubkey,
+ coord_nick=COORDINATOR,
+ coord_hostpubkey=hostpubkey)
+ self.dkg_sessions[session_id] = session
+ coordinator.parties[hostpubkey] = COORDINATOR
+ coordinator.sessions[hostpubkey] = {}
+ pmsg1 = self.party_step1(session_id, serialize=False)
+ if not pmsg1:
+                raise Exception(f'Cannot create pmsg1 for '
+ f'session {session_id.hex()}')
+ coordinator.sessions[hostpubkey]['nick'] = COORDINATOR
+ coordinator.sessions[hostpubkey]['pmsg1'] = pmsg1
+ coin_key = CCoinKey.from_secret_bytes(hostseckey)
+ sig = coin_key.sign_schnorr_no_tweak(session_id)
+ return hostpubkeyhash.hex(), session_id, sig.hex()
+ except Exception as e:
+ jlog.error(f'dkg_init: {repr(e)}')
+ return None, None, None
+
+ def on_dkg_init(self, nick, pubkeyhash, session_id, sig):
+ try:
+ if session_id in self.dkg_sessions:
+ raise Exception(f'session {session_id.hex()} already exists')
+ pubkey = self.find_pubkey_by_pubkeyhash(pubkeyhash)
+ if not pubkey:
+ raise Exception(f'pubkey for {pubkeyhash.hex()} not found')
+ xpubkey = XOnlyPubKey(pubkey[1:])
+ if not xpubkey.verify_schnorr(session_id, hextobin(sig)):
+ raise Exception('signature verification failed')
+ wallet = self.wallet_service.wallet
+ hostseckey = wallet._hostseckey[:32]
+ hostpubkey = hostpubkey_gen(hostseckey)
+ hostpubkeyhash = sha256(hostpubkey).digest()
+ session = DKGSession(session_id=session_id,
+ hostpubkey=hostpubkey,
+ coord_nick=nick,
+ coord_hostpubkey=pubkey)
+ self.dkg_sessions[session_id] = session
+ coin_key = CCoinKey.from_secret_bytes(hostseckey)
+ sig = coin_key.sign_schnorr_no_tweak(session_id)
+ pmsg1 = self.party_step1(session_id)
+ return (nick, hostpubkeyhash.hex(), session_id.hex(),
+ sig.hex(), pmsg1)
+ except Exception as e:
+ jlog.error(f'on_dkg_init: {repr(e)}')
+ return None, None, None, None, None
+
+ def party_step1(self, session_id, *, serialize=True):
+ try:
+ session = self.dkg_sessions.get(session_id)
+ if not session:
+ raise Exception(f'session {session_id.hex()} not found')
+ if session.state1:
+ raise Exception(f'session.state1 already set '
+ f'for {session_id.hex()}')
+ wallet = self.wallet_service.wallet
+ hostseckey = wallet._hostseckey[:32]
+ random = os.urandom(32)
+ session.state1, pmsg1 = participant_step1(
+ hostseckey, self.session_params, random)
+ if serialize:
+ pmsg1 = self.serialize_pmsg1(pmsg1)
+ jlog.debug('party_step1 run')
+ return pmsg1
+ except Exception as e:
+ jlog.error(f'party_step1: {repr(e)}')
+
+ def on_dkg_pmsg1(self, nick, pubkeyhash, session_id, sig, pmsg1):
+ try:
+ coordinator = self.dkg_coordinators.get(session_id)
+ if not coordinator:
+ raise Exception(f'session {session_id.hex()} not found')
+ pubkey = self.find_pubkey_by_pubkeyhash(pubkeyhash)
+ if not pubkey:
+ raise Exception(f'pubkey for {pubkeyhash.hex()} not found')
+ xpubkey = XOnlyPubKey(pubkey[1:])
+ if not xpubkey.verify_schnorr(session_id, hextobin(sig)):
+                raise Exception('signature verification failed')
+ if pubkey in coordinator.parties:
+ jlog.debug(f'pubkey {pubkey.hex()} already in'
+ f' coordinator parties')
+ return None, None
+ coordinator.parties[pubkey] = nick
+
+            if pubkey not in coordinator.sessions:
+ coordinator.sessions[pubkey] = {}
+ coordinator.sessions[pubkey]['nick'] = nick
+ coordinator.sessions[pubkey]['pmsg1'] = pmsg1
+
+ ready_list = set()
+ if len(coordinator.sessions) == len(self.hostpubkeys):
+ for session in coordinator.sessions.values():
+ if session['nick'] == COORDINATOR:
+ continue
+ ready_list.add(session['nick'])
+ if ready_list and len(ready_list) == len(self.hostpubkeys) - 1:
+ cmsg1 = self.coordinator_step1(session_id)
+ pmsg2 = self.party_step2(session_id, cmsg1, serialize=False)
+ self.on_dkg_pmsg2(COORDINATOR, session_id, pmsg2)
+ return ready_list, self.serialize_cmsg1(cmsg1)
+ else:
+ return None, None
+ except Exception as e:
+ jlog.error(f'on_dkg_pmsg1: {repr(e)}')
+ return None, None
+
+ def coordinator_step1(self, session_id):
+ try:
+ coordinator = self.dkg_coordinators.get(session_id)
+ if not coordinator:
+ raise Exception(f'session {session_id.hex()} not found')
+ if coordinator.state:
+ raise Exception(f'coordinator.state already set '
+ f'for {session_id.hex()}')
+ pmsgs1 = []
+ for pubkey in self.hostpubkeys:
+ session = coordinator.sessions[pubkey]
+ pmsgs1.append(session['pmsg1'])
+
+ coordinator.state, cmsg1 = coordinator_step1(
+ pmsgs1, self.session_params)
+ jlog.debug('coordinator_step1 run')
+ return cmsg1
+ except Exception as e:
+ jlog.error(f'coordinator_step1: {repr(e)}')
+
+ def party_step2(self, session_id, cmsg1, *, serialize=True):
+ try:
+ session = self.dkg_sessions.get(session_id)
+ if not session:
+ raise Exception(f'session {session_id.hex()} not found')
+ if session.state2:
+ raise Exception(f'session.state2 already set '
+ f'for {session_id.hex()}')
+ wallet = self.wallet_service.wallet
+ hostseckey = wallet._hostseckey[:32]
+ session.state2, pmsg2 = participant_step2(
+ hostseckey, session.state1, cmsg1)
+ if serialize:
+ pmsg2 = self.serialize_pmsg2(pmsg2)
+ jlog.debug('party_step2 run')
+ return pmsg2
+ except Exception as e:
+ jlog.error(f'party_step2: {repr(e)}')
+
+ def on_dkg_pmsg2(self, nick, session_id, pmsg2):
+ try:
+ coordinator = self.dkg_coordinators.get(session_id)
+ if not coordinator:
+ raise Exception(f'session {session_id.hex()} not found')
+ party = None
+ for pubkey in self.hostpubkeys:
+ if nick == coordinator.parties.get(pubkey):
+ party = nick
+ break
+ if not party:
+ raise Exception(f'unknown party {nick}')
+            if pubkey not in coordinator.sessions:
+ raise Exception(f'party pubkey for {nick} not found')
+ if 'pmsg2' in coordinator.sessions[pubkey]:
+ raise Exception(f'pmsg2 already set in coordinator sessions '
+ f'for pubkey {pubkey.hex()}')
+ coordinator.sessions[pubkey]['pmsg2'] = pmsg2
+
+ ready_list = set()
+ if len(coordinator.sessions) == len(self.hostpubkeys):
+ for session in coordinator.sessions.values():
+ if session['nick'] == COORDINATOR:
+ continue
+                    if 'pmsg2' not in session:
+ continue
+ ready_list.add(session['nick'])
+ if ready_list and len(ready_list) == len(self.hostpubkeys) - 1:
+ cmsg2 = self.coordinator_step2(session_id)
+ ext_recovery = coordinator.ext_recovery
+ return ready_list, self.serialize_cmsg2(cmsg2), ext_recovery
+ else:
+ return None, None, None
+ except Exception as e:
+ jlog.error(f'on_dkg_pmsg2: {repr(e)}')
+ return None, None, None
+
+ def coordinator_step2(self, session_id):
+ try:
+ coordinator = self.dkg_coordinators.get(session_id)
+ if not coordinator:
+ raise Exception(f'session {session_id.hex()} not found')
+ if coordinator.cmsg2:
+ raise Exception(f'coordinator.cmsg2 already set '
+ f'for {session_id.hex()}')
+ pmsgs2 = []
+ for pubkey in self.hostpubkeys:
+ session = coordinator.sessions[pubkey]
+ pmsgs2.append(session['pmsg2'])
+ cmsg2, dkg_output, recovery_data = coordinator_finalize(
+ coordinator.state, pmsgs2)
+ coordinator.cmsg2 = cmsg2
+ jlog.debug('coordinator_step2 run')
+ return cmsg2
+ except Exception as e:
+            jlog.error(f'coordinator_step2: {repr(e)}')
+
+ def finalize(self, session_id, cmsg2, ext_recovery):
+ try:
+ session = self.dkg_sessions.get(session_id)
+ if not session:
+ raise Exception(f'session {session_id.hex()} not found')
+ if session.dkg_output:
+ raise Exception(f'session.dkg_output already set '
+ f'for {session_id.hex()}')
+ session.dkg_output, session.recovery_data = participant_finalize(
+ session.state2, cmsg2)
+ jlog.debug('finalize run')
+ dkg_man = self.wallet_service.dkg
+ session_id = session.session_id
+ coordinator = self.dkg_coordinators.get(session_id)
+ coord_hostpubkey = session.coord_hostpubkey
+ if coordinator:
+ dkg_man.add_coordinator_data(
+ session_id=session_id,
+ dkg_output=session.dkg_output,
+ hostpubkeys=self.hostpubkeys,
+ t=self.t,
+ recovery_data=session.recovery_data,
+ ext_recovery=ext_recovery)
+ else:
+ dkg_man.add_party_data(
+ session_id=session_id,
+ dkg_output=session.dkg_output,
+ hostpubkeys=self.hostpubkeys,
+ t=self.t,
+ recovery_data=session.recovery_data,
+ ext_recovery=ext_recovery)
+ self.dkg_sessions.pop(session_id)
+ return True
+ except Exception as e:
+ jlog.error(f'finalize: {repr(e)}')
+ return False
+
+ def on_dkg_finalized(self, nick, session_id):
+ try:
+ coordinator = self.dkg_coordinators.get(session_id)
+ if not coordinator:
+ raise Exception(f'session {session_id.hex()} not found')
+ party = None
+ for pubkey in self.hostpubkeys:
+ if nick == coordinator.parties.get(pubkey):
+ party = nick
+ break
+ if not party:
+ raise Exception(f'unknown party {nick}')
+            if pubkey not in coordinator.sessions:
+ raise Exception(f'party pubkey for {nick} not found')
+ if 'finalized' in coordinator.sessions[pubkey]:
+ raise Exception(f'finalized already set in coordinator '
+ f'sessions for pubkey {pubkey.hex()}')
+ coordinator.sessions[pubkey]['finalized'] = True
+
+ ready_list = set()
+ if len(coordinator.sessions) == len(self.hostpubkeys):
+ for session in coordinator.sessions.values():
+ if session['nick'] == COORDINATOR:
+ continue
+                    if 'finalized' not in session:
+ continue
+ ready_list.add(session['nick'])
+ if ready_list and len(ready_list) == len(self.hostpubkeys) - 1:
+ ext_recovery = coordinator.ext_recovery
+ self.finalize(session_id, coordinator.cmsg2, ext_recovery)
+ except Exception as e:
+ jlog.error(f'on_dkg_finalized: {repr(e)}')
+
+ async def wait_on_dkg_output(self, session_id):
+ try:
+ session = self.dkg_sessions.get(session_id)
+ if not session:
+ raise Exception(f'session {session_id.hex()} not found')
+ while True:
+ await asyncio.sleep(1)
+ if session.dkg_output:
+ break
+ waiting_sec = time.time() - session.dkg_init_sec
+ if waiting_sec > self.DKG_WAIT_SEC:
+ raise Exception(f'timed out DKG session '
+ f'{session_id.hex()}')
+ return session.dkg_output.threshold_pubkey
+ except Exception as e:
+            jlog.warning(f'wait_on_dkg_output: {repr(e)}')
+ finally:
+            popped = self.dkg_sessions.pop(session_id, None)
+            if not popped:
+                jlog.debug(f'wait_on_dkg_output: session {session_id.hex()}'
+                           f' not found in the dkg_sessions')
+            popped = self.dkg_coordinators.pop(session_id, None)
+            if not popped:
+                jlog.debug(f'wait_on_dkg_output: session {session_id.hex()}'
+                           f' not found in the dkg_coordinators')
+
+ def encrypt_ext_recovery(self, coordinator, ext_recovery_bytes):
+ try:
+ pubkey = coordinator.hostpubkey
+ return btc.ecies_encrypt(ext_recovery_bytes, pubkey)
+ except Exception as e:
+            jlog.error(f'encrypt_ext_recovery: {repr(e)}')
+
+ def serialize_pmsg1(self, pmsg1):
+ try:
+ enc_pmsg = pmsg1.enc_pmsg
+ simpl_pmsg = enc_pmsg.simpl_pmsg
+ com = simpl_pmsg.com
+ pop = simpl_pmsg.pop
+ ges = com.ges
+ pubnonce = enc_pmsg.pubnonce
+ enc_shares = enc_pmsg.enc_shares
+
+ res = b''
+ res += len(ges).to_bytes(2, 'big')
+ for ge in ges:
+ res += ge.to_bytes_compressed()
+ res += bytes(pop)
+ res += pubnonce
+ res += len(enc_shares).to_bytes(2, 'big')
+ for es in enc_shares:
+ res += es.to_bytes()
+ return res
+ except Exception as e:
+ jlog.error(f'serialize_pmsg1: serialization failed {repr(e)}')
+
+ def deserialize_pmsg1(self, pmsg1_bytes):
+ try:
+ b = pmsg1_bytes
+ i = 0
+
+ ges_len = int.from_bytes(b[i:i+2], 'big')
+ i += 2
+
+ ges = []
+ for j in range(ges_len):
+ ge = secp256k1.GE.from_bytes_compressed(b[i:i+33])
+ ges.append(ge)
+ i += 33
+
+ pop = simplpedpop.Pop(b[i:i+64])
+ i += 64
+
+ pubnonce = b[i:i+33]
+ i += 33
+
+ enc_shares_len = int.from_bytes(b[i:i+2], 'big')
+ i += 2
+
+ enc_shares = []
+ for j in range(enc_shares_len):
+ es = secp256k1.Scalar.from_bytes(b[i:i+32])
+ enc_shares.append(es)
+ i += 32
+
+ assert b[i:] == b''
+
+ com = vss.VSSCommitment(ges)
+ simpl_pmsg = simplpedpop.ParticipantMsg(com, pop)
+ enc_pmsg = encpedpop.ParticipantMsg(simpl_pmsg, pubnonce,
+ enc_shares)
+ return ParticipantMsg1(enc_pmsg)
+ except Exception as e:
+ jlog.error(f'deserialize_pmsg1: deserialization failed {repr(e)}')
+
+ def serialize_pmsg2(self, pmsg2):
+ try:
+ return b'' + pmsg2.sig
+ except Exception as e:
+ jlog.error(f'serialize_pmsg2: serialization failed {repr(e)}')
+
+ def deserialize_pmsg2(self, pmsg2_bytes):
+ try:
+ return ParticipantMsg2(pmsg2_bytes)
+ except Exception as e:
+ jlog.error(f'deserialize_pmsg2: deserialization failed {repr(e)}')
+
+ def serialize_cmsg1(self, cmsg1):
+ try:
+ enc_cmsg = cmsg1.enc_cmsg
+ simpl_cmsg = enc_cmsg.simpl_cmsg
+ coms_to_secrets = simpl_cmsg.coms_to_secrets
+ sum_coms_to_nonconst_terms = simpl_cmsg.sum_coms_to_nonconst_terms
+ pops = simpl_cmsg.pops
+ pubnonces = enc_cmsg.pubnonces
+ enc_secshares = cmsg1.enc_secshares
+
+ res = b''
+ res += len(coms_to_secrets).to_bytes(2, 'big')
+ for cts in coms_to_secrets:
+ res += cts.to_bytes_compressed()
+ res += len(sum_coms_to_nonconst_terms).to_bytes(2, 'big')
+ for sctnct in sum_coms_to_nonconst_terms:
+ res += sctnct.to_bytes_compressed()
+ res += len(pops).to_bytes(2, 'big')
+ for pop in pops:
+ res += bytes(pop)
+ res += len(pubnonces).to_bytes(2, 'big')
+ for pubnonce in pubnonces:
+ res += pubnonce
+ res += len(enc_secshares).to_bytes(2, 'big')
+ for es in enc_secshares:
+ res += es.to_bytes()
+ return res
+ except Exception as e:
+ jlog.error(f'serialize_cmsg1: serialization failed {repr(e)}')
+
+ def deserialize_cmsg1(self, cmsg1_bytes):
+ try:
+ b = cmsg1_bytes
+ i = 0
+
+ coms_to_secrets_len = int.from_bytes(b[i:i+2], 'big')
+ i += 2
+ coms_to_secrets = []
+ for j in range(coms_to_secrets_len):
+ cts = secp256k1.GE.from_bytes_compressed(b[i:i+33])
+ coms_to_secrets.append(cts)
+ i += 33
+
+ sum_coms_to_nonconst_terms_len = int.from_bytes(b[i:i+2], 'big')
+ i += 2
+ sum_coms_to_nonconst_terms = []
+ for j in range(sum_coms_to_nonconst_terms_len):
+ sctnct = secp256k1.GE.from_bytes_compressed(b[i:i+33])
+ sum_coms_to_nonconst_terms.append(sctnct)
+ i += 33
+
+ pops_len = int.from_bytes(b[i:i+2], 'big')
+ i += 2
+ pops = []
+ for j in range(pops_len):
+ pop = simplpedpop.Pop(b[i:i+64])
+ pops.append(pop)
+ i += 64
+
+ pubnonces_len = int.from_bytes(b[i:i+2], 'big')
+ i += 2
+ pubnonces = []
+ for j in range(pubnonces_len):
+ pubnonce = b[i:i+33]
+ pubnonces.append(pubnonce)
+ i += 33
+
+ enc_secshares_len = int.from_bytes(b[i:i+2], 'big')
+ i += 2
+ enc_secshares = []
+ for j in range(enc_secshares_len):
+ es = secp256k1.Scalar.from_bytes(b[i:i+32])
+ enc_secshares.append(es)
+ i += 32
+
+ assert b[i:] == b''
+
+ simpl_cmsg = simplpedpop.CoordinatorMsg(
+ coms_to_secrets, sum_coms_to_nonconst_terms, pops)
+ enc_cmsg = encpedpop.CoordinatorMsg(simpl_cmsg, pubnonces)
+ return CoordinatorMsg1(enc_cmsg, enc_secshares)
+ except Exception as e:
+ jlog.error(f'deserialize_cmsg1: deserialization failed {repr(e)}')
+
+ def serialize_cmsg2(self, cmsg2):
+ try:
+ return b'' + cmsg2.cert
+ except Exception as e:
+ jlog.error(f'serialize_cmsg2: serialization failed {repr(e)}')
+
+ def deserialize_cmsg2(self, cmsg2_bytes):
+ try:
+ return CoordinatorMsg2(cmsg2_bytes)
+ except Exception as e:
+ jlog.error(f'deserialize_cmsg2: deserialization failed {repr(e)}')
+
+
+class FROSTCoordinator:
+
+ def __init__(self, *, session_id, hostpubkey, dkg_session_id, msg):
+ self.session_id = session_id
+ self.frost_init_sec = 0
+ self.hostpubkey = hostpubkey
+ self.dkg_session_id = dkg_session_id
+ self.msg = msg
+ self.parties = dict()
+ self.sessions = dict()
+ self.nonce_agg = None
+ self.ids = []
+ self.sig = None
+ self.tweaked_pubkey = None
+
+ def __str__(self):
+ return self.__repr__()
+
+ def __repr__(self):
+ return (f'FROSTCoordinator(session_id={self.session_id}, '
+ f'frost_init_sec={self.frost_init_sec}, '
+ f'hostpubkey={self.hostpubkey}, '
+ f'dkg_session_id={self.dkg_session_id}, '
+ f'msg={self.msg}, '
+ f'parties={self.parties}, '
+ f'sessions={self.sessions}, '
+ f'nonce_agg={self.nonce_agg}, '
+ f'ids={self.ids}, '
+ f'sig={self.sig})')
+
+
+class FROSTSession:
+
+ def __init__(self, *, session_id, hostpubkey,
+ coord_nick, coord_hostpubkey):
+ self.session_id = session_id
+ self.hostpubkey = hostpubkey
+ self.coord_nick = coord_nick
+ self.coord_hostpubkey = coord_hostpubkey
+ self.sec_nonce = None
+ self.pub_nonce = None
+ self.partial_sig = None
+
+ def __str__(self):
+ return self.__repr__()
+
+ def __repr__(self):
+ return (f'FROSTSession(session_id={self.session_id}, '
+ f'hostpubkey={self.hostpubkey}, '
+ f'coord_nick={self.coord_nick}, '
+ f'coord_hostpubkey={self.coord_hostpubkey}, '
+ f'sec_nonce={self.sec_nonce}, '
+ f'pub_nonce={self.pub_nonce}, '
+ f'partial_sig={self.partial_sig})')
+
+
+class FROSTClient(DKGClient):
+
+ FROST_WAIT_SEC = 60
+
+ def __init__(self, wallet_service):
+ super().__init__(wallet_service)
+ self.frost_coordinators = dict()
+ self.frost_sessions = dict()
+
+ def frost_init(self, dkg_session_id, msg_bytes):
+ try:
+ wallet = self.wallet_service.wallet
+ hostseckey = wallet._hostseckey[:32]
+ hostpubkey = hostpubkey_gen(hostseckey)
+ self.my_id = None
+ for i, p in enumerate(self.hostpubkeys):
+ if p == hostpubkey:
+ self.my_id = (i+1).to_bytes(32, 'big')
+ break
+ assert self.my_id is not None
+ hostpubkeyhash = sha256(hostpubkey).digest()
+ session_id = sha256(os.urandom(32)).digest()
+ coordinator = FROSTCoordinator(session_id=session_id,
+ hostpubkey=hostpubkey,
+ dkg_session_id=dkg_session_id,
+ msg=msg_bytes)
+ self.frost_coordinators[session_id] = coordinator
+ session = FROSTSession(session_id=session_id,
+ hostpubkey=hostpubkey,
+ coord_nick=COORDINATOR,
+ coord_hostpubkey=hostpubkey)
+ self.frost_sessions[session_id] = session
+ coordinator.parties[hostpubkey] = COORDINATOR
+ coordinator.sessions[hostpubkey] = {}
+ coordinator.sessions[hostpubkey]['nick'] = COORDINATOR
+ pub_nonce = self.frost_round1(session_id)
+ if not pub_nonce:
+                raise Exception(f'Cannot create pub_nonce for '
+ f'session {session_id.hex()}')
+ coordinator.sessions[hostpubkey]['pub_nonce'] = pub_nonce
+ coin_key = CCoinKey.from_secret_bytes(hostseckey)
+ sig = coin_key.sign_schnorr_no_tweak(session_id)
+ return hostpubkeyhash.hex(), session_id, sig.hex()
+ except Exception as e:
+ jlog.error(f'frost_init: {repr(e)}')
+ return None, None, None
+
+ def on_frost_init(self, nick, pubkeyhash, session_id, sig):
+ try:
+ if session_id in self.frost_sessions:
+ raise Exception(f'session {session_id.hex()} already exists')
+ pubkey = self.find_pubkey_by_pubkeyhash(pubkeyhash)
+ if not pubkey:
+ raise Exception(f'pubkey for {pubkeyhash.hex()} not found')
+ xpubkey = XOnlyPubKey(pubkey[1:])
+ if not xpubkey.verify_schnorr(session_id, hextobin(sig)):
+ raise Exception('signature verification failed')
+ wallet = self.wallet_service.wallet
+ hostseckey = wallet._hostseckey[:32]
+ hostpubkey = hostpubkey_gen(hostseckey)
+ self.my_id = None
+ for i, p in enumerate(self.hostpubkeys):
+ if p == hostpubkey:
+ self.my_id = (i+1).to_bytes(32, 'big')
+ break
+ assert self.my_id is not None
+ hostpubkeyhash = sha256(hostpubkey).digest()
+ session = FROSTSession(session_id=session_id,
+ hostpubkey=hostpubkey,
+ coord_nick=nick,
+ coord_hostpubkey=pubkey)
+ self.frost_sessions[session_id] = session
+ coin_key = CCoinKey.from_secret_bytes(hostseckey)
+ sig = coin_key.sign_schnorr_no_tweak(session_id)
+ pub_nonce = self.frost_round1(session_id)
+ return (nick, hostpubkeyhash.hex(), session_id.hex(),
+ sig.hex(), pub_nonce)
+ except Exception as e:
+ jlog.error(f'on_frost_init: {repr(e)}')
+ return None, None, None, None, None
+
+ def frost_round1(self, session_id):
+ try:
+ session = self.frost_sessions.get(session_id)
+ if not session:
+ raise Exception(f'session {session_id.hex()} not found')
+ if session.sec_nonce:
+ raise Exception(f'session.sec_nonce already set '
+ f'for {session_id.hex()}')
+ session.sec_nonce, session.pub_nonce = frost.nonce_gen(
+ secshare=None, pubshare=None, group_pk=None, msg=None,
+ extra_in=None)
+ jlog.debug('frost_round1 run')
+ return session.pub_nonce
+ except Exception as e:
+ jlog.error(f'frost_round1: {repr(e)}')
+
+ def on_frost_round1(self, nick, pubkeyhash, session_id, sig, pub_nonce):
+ try:
+ coordinator = self.frost_coordinators.get(session_id)
+ if not coordinator:
+ raise Exception(f'session {session_id.hex()} not found')
+ if len(coordinator.sessions) == self.t:
+                jlog.debug('on_frost_round1: minimum set of pub_nonces '
+                           'already present, ignoring additional pub_nonce')
+ return None, None, None, None, None
+ pubkey = self.find_pubkey_by_pubkeyhash(pubkeyhash)
+ if not pubkey:
+ raise Exception(f'pubkey for {pubkeyhash.hex()} not found')
+ xpubkey = XOnlyPubKey(pubkey[1:])
+ if not xpubkey.verify_schnorr(session_id, hextobin(sig)):
+                raise Exception('signature verification failed')
+ if pubkey in coordinator.parties:
+ jlog.debug(f'pubkey {pubkey.hex()} already in'
+ f' coordinator parties')
+ return None, None, None, None, None
+ coordinator.parties[pubkey] = nick
+
+            if pubkey not in coordinator.sessions:
+ coordinator.sessions[pubkey] = {}
+ coordinator.sessions[pubkey]['nick'] = nick
+ coordinator.sessions[pubkey]['pub_nonce'] = pub_nonce
+
+ ready_list = set()
+ if len(coordinator.sessions) == self.t:
+ for session in coordinator.sessions.values():
+ if session['nick'] == COORDINATOR:
+ continue
+ ready_list.add(session['nick'])
+ if ready_list and len(ready_list) == self.t - 1:
+ coordinator.nonce_agg, dkg_session_id, ids, msg = \
+ self.frost_agg1(session_id)
+ partial_sig = self.frost_round2(
+ session_id, coordinator.nonce_agg,
+ dkg_session_id, ids, msg)
+ self.on_frost_round2(
+ COORDINATOR, session_id, partial_sig)
+ return (ready_list, coordinator.nonce_agg,
+ dkg_session_id, ids, msg)
+ else:
+ return None, None, None, None, None
+ except Exception as e:
+ jlog.error(f'on_frost_round1: {repr(e)}')
+ return None, None, None, None, None
+
+ def frost_agg1(self, session_id):
+ try:
+ coordinator = self.frost_coordinators.get(session_id)
+ if not coordinator:
+ raise Exception(f'session {session_id.hex()} not found')
+ if coordinator.nonce_agg:
+ raise Exception(f'coordinator.nonce_agg already set '
+ f'for {session_id.hex()}')
+ pub_nonces = []
+ ids = []
+ for i, pubkey in enumerate(self.hostpubkeys):
+ session = coordinator.sessions.get(pubkey)
+ if not session:
+ continue
+ pub_nonce = session.get('pub_nonce')
+ if not pub_nonce:
+ continue
+ pub_nonces.append(pub_nonce)
+ ids.append(i+1)
+ coordinator.ids = ids.copy()
+ ids_bytes = []
+ for i in ids:
+ ids_bytes.append(i.to_bytes(32, 'big'))
+ assert len(ids) == self.t
+ coordinator.nonce_agg = frost.nonce_agg(pub_nonces, ids_bytes)
+ jlog.debug('frost_agg1 run')
+ return (coordinator.nonce_agg, coordinator.dkg_session_id, ids,
+ coordinator.msg)
+ except Exception as e:
+ jlog.error(f'frost_agg1: {repr(e)}')
+ return None, None, None, None
+
+ def frost_round2(self, session_id, nonce_agg, dkg_session_id, ids, msg):
+ try:
+ session = self.frost_sessions.get(session_id)
+ if not session:
+ raise Exception(f'session {session_id.hex()} not found')
+ if session.partial_sig:
+ raise Exception(f'session.partial_sig already set '
+ f'for {session_id.hex()}')
+ dkg = self.wallet_service.wallet.dkg
+ secshare = dkg._dkg_secshare.get(dkg_session_id)
+ if not secshare:
+ raise Exception(f'secshare not found for '
+ f'{dkg_session_id.hex()}')
+ _pubshares = dkg._dkg_pubshares.get(dkg_session_id)
+ if not _pubshares:
+ raise Exception(f'pubshares not found for '
+ f'{dkg_session_id.hex()}')
+ pubshares = []
+ for i, pubshare in enumerate(_pubshares):
+ if (i+1) not in ids:
+ continue
+ pubshares.append(pubshare)
+ ids_bytes = []
+ for i in ids:
+ ids_bytes.append(i.to_bytes(32, 'big'))
+
+ tweak = calc_tweak(pubshares, ids_bytes)
+ tweaks = [tweak]
+ is_xonly = [True]
+ session_ctx = frost.SessionContext(
+ nonce_agg, ids_bytes, pubshares, tweaks, is_xonly, msg)
+ session.partial_sig = partial_sig = frost.sign(
+ session.sec_nonce, secshare, self.my_id, session_ctx)
+ jlog.debug('frost_round2 run')
+ return partial_sig
+ except Exception as e:
+ jlog.error(f'frost_round2: {repr(e)}')
+
+ def on_frost_round2(self, nick, session_id, partial_sig):
+ try:
+ coordinator = self.frost_coordinators.get(session_id)
+ if not coordinator:
+ raise Exception(f'session {session_id.hex()} not found')
+ party = None
+ for pubkey in self.hostpubkeys:
+ if nick == coordinator.parties.get(pubkey):
+ party = nick
+ break
+ if not party:
+ raise Exception(f'unknown party {nick}')
+            if pubkey not in coordinator.sessions:
+ raise Exception(f'party pubkey for {nick} not found')
+ if 'partial_sig' in coordinator.sessions[pubkey]:
+ raise Exception(f'partial_sig already set in coordinator '
+ f'sessions for pubkey {pubkey.hex()}')
+ coordinator.sessions[pubkey]['partial_sig'] = partial_sig
+
+ ready_list = set()
+ if len(coordinator.sessions) == self.t:
+ for session in coordinator.sessions.values():
+ if session['nick'] == COORDINATOR:
+ continue
+                    if 'partial_sig' not in session:
+ continue
+ ready_list.add(session['nick'])
+ if ready_list and len(ready_list) == self.t - 1:
+ dkg_session_id = coordinator.dkg_session_id
+ dkg = self.wallet_service.wallet.dkg
+ _pubshares = dkg._dkg_pubshares.get(dkg_session_id)
+ if not _pubshares:
+ raise Exception(f'pubshares not found for '
+ f'{dkg_session_id.hex()}')
+ ids = coordinator.ids
+ ids_bytes = []
+ pubshares = []
+ for i, pubshare in enumerate(_pubshares):
+ if (i+1) not in ids:
+ continue
+ pubshares.append(pubshare)
+ ids_bytes.append((i+1).to_bytes(32, 'big'))
+ tweak = calc_tweak(pubshares, ids_bytes)
+ tweaks = [tweak]
+ is_xonly = [True]
+ session_ctx = frost.SessionContext(
+ coordinator.nonce_agg, ids_bytes, pubshares, tweaks,
+ is_xonly, coordinator.msg)
+ partial_sigs = []
+ for pubkey in self.hostpubkeys:
+ session = coordinator.sessions.get(pubkey)
+ if not session:
+ continue
+ if 'partial_sig' in session:
+ partial_sigs.append(session['partial_sig'])
+ sig = frost.partial_sig_agg(
+ partial_sigs, ids_bytes, session_ctx)
+ tweak_ctx = frost.group_pubkey_and_tweak(
+ pubshares, ids_bytes, tweaks, is_xonly)
+                Q, _, _ = tweak_ctx
+ tweaked_pubkey = frost.xbytes(Q)
+ if not schnorr_verify(coordinator.msg, tweaked_pubkey, sig):
+ raise Exception(f'on_frost_round2: schnorr_verify failed '
+ f'for {dkg_session_id.hex()}')
+ coordinator.sig = sig
+ coordinator.tweaked_pubkey = frost.cbytes(Q)
+ return sig
+ else:
+ return None
+ except Exception as e:
+ jlog.error(f'on_frost_round2: {repr(e)}')
+ return None
+
+ async def wait_on_sig(self, session_id):
+ try:
+ coordinator = self.frost_coordinators.get(session_id)
+ if not coordinator:
+ raise Exception(f'session {session_id.hex()} not found')
+ while True:
+ await asyncio.sleep(1)
+ if coordinator.sig:
+ break
+ waiting_sec = time.time() - coordinator.frost_init_sec
+ if waiting_sec > self.FROST_WAIT_SEC:
+ raise Exception(f'timed out FROST session '
+ f'{session_id.hex()}')
+ return coordinator.sig, coordinator.tweaked_pubkey
+ except Exception as e:
+ jlog.error(f'wait_on_sig: {repr(e)}')
+ return None, repr(e)
+ finally:
+            popped = self.frost_sessions.pop(session_id, None)
+            if not popped:
+                jlog.debug(f'wait_on_sig: session {session_id.hex()} not'
+                           f' found in the frost_sessions')
+            popped = self.frost_coordinators.pop(session_id, None)
+            if not popped:
+                jlog.debug(f'wait_on_sig: session {session_id.hex()} not'
+                           f' found in the frost_coordinators')
diff --git a/src/jmclient/frost_ipc.py b/src/jmclient/frost_ipc.py
new file mode 100644
index 0000000..1d9767e
--- /dev/null
+++ b/src/jmclient/frost_ipc.py
@@ -0,0 +1,244 @@
+# -*- coding: utf-8 -*-
+
+import asyncio
+import pickle
+
+import jmbitcoin as btc
+from jmbase.support import jmprint, EXIT_FAILURE, twisted_sys_exit, get_log
+
+
+jlog = get_log()
+
+
+class IPCBase:
+
+ def encrypt_msg(self, msg_dict):
+ msg_bytes = pickle.dumps(msg_dict)
+ return btc.ecies_encrypt(msg_bytes, self.pubkey) + b'\n'
+
+ def decrypt_msg(self, enc_bytes):
+ msg_bytes = btc.ecies_decrypt(self.wallet._hostseckey, enc_bytes)
+ return pickle.loads(msg_bytes)
+
+
+class FrostIPCServer(IPCBase):
+
+ def __init__(self, wallet):
+ self.loop = asyncio.get_event_loop()
+ self.wallet = wallet
+ self.pubkey = btc.privkey_to_pubkey(wallet._hostseckey)
+ self.sock_path = f'{wallet._storage.get_location()}.sock'
+ self.srv = None
+ self.sr = None
+ self.sw = None
+ self.tasks = set()
+
+ async def async_init(self):
+ self.srv = await asyncio.start_unix_server(
+ self.handle_connection, self.sock_path)
+
+ async def serve_forever(self):
+ return await self.srv.serve_forever()
+
+ async def handle_connection(self, sr, sw):
+ if self.sr or self.sw:
+ jlog.error('FrostIPCServer.handle_connection: client '
+ 'already connected, ignore other connection attempt')
+ return
+ jlog.info('FrostIPCServer.handle_connection: connected new client')
+ self.sr = sr
+ self.sw = sw
+ await self.process_msgs()
+
+ async def process_msgs(self):
+ while True:
+ try:
+ line_data = await self.sr.readline()
+ if not line_data:
+ if self.sr.at_eof():
+ jlog.info('FrostIPCServer.process_msg: '
+ 'client disconnected')
+ self.sr = None
+ self.sw = None
+ while self.tasks:
+ task = self.tasks.pop()
+ task.cancel()
+ break
+ else:
+ jlog.error('FrostIPCServer.process_msg: '
+ 'empty line ignored')
+ continue
+ enc_bytes = line_data.strip()
+ msg_dict = self.decrypt_msg(enc_bytes)
+ msg_id = msg_dict['msg_id']
+ cmd = msg_dict['cmd']
+ data = msg_dict['data']
+ task = None
+ if cmd == 'get_dkg_pubkey':
+ task = self.loop.create_task(
+ self.on_get_dkg_pubkey(msg_id, *data))
+ elif cmd == 'frost_sign':
+ task = self.loop.create_task(
+ self.on_frost_sign(msg_id, *data))
+ if task:
+ self.tasks.add(task)
+ except Exception as e:
+ jlog.error(f'FrostIPCServer.process_msgs: {repr(e)}')
+ await asyncio.sleep(0.1)
+
+ async def on_get_dkg_pubkey(self, msg_id, mixdepth, address_type, index):
+ try:
+ wallet = self.wallet
+ dkg = wallet.dkg
+ new_pubkey = dkg.find_dkg_pubkey(mixdepth, address_type, index)
+ if new_pubkey is None:
+ client = wallet.client_factory.getClient()
+ frost_client = wallet.client_factory.client
+ frost_client.dkg_gen_list.append(
+ (mixdepth, address_type, index))
+ await client.dkg_gen()
+ new_pubkey = dkg.find_dkg_pubkey(mixdepth, address_type, index)
+ if new_pubkey:
+ await self.send_dkg_pubkey(msg_id, new_pubkey)
+ except Exception as e:
+ jlog.error(f'FrostIPCServer.on_get_dkg_pubkey: {repr(e)}')
+
+ async def send_dkg_pubkey(self, msg_id, pubkey):
+ try:
+ msg_dict = {
+ 'msg_id': msg_id,
+ 'cmd': 'dkg_pubkey',
+ 'data': pubkey,
+ }
+ self.sw.write(self.encrypt_msg(msg_dict))
+ await self.sw.drain()
+ except Exception as e:
+ jlog.error(f'FrostIPCServer.send_dkg_pubkey: {repr(e)}')
+
+ async def on_frost_sign(self, msg_id, mixdepth, address_type, index,
+ sighash):
+ try:
+ wallet = self.wallet
+ client = wallet.client_factory.getClient()
+ frost_client = wallet.client_factory.client
+ dkg = wallet.dkg
+ dkg_session_id = dkg.find_session(mixdepth, address_type, index)
+ session_id, _, _ = client.frost_init(dkg_session_id, sighash)
+ sig, tweaked_pubkey = await frost_client.wait_on_sig(session_id)
+ pubkey = dkg.find_dkg_pubkey(mixdepth, address_type, index)
+ await self.send_frost_sig(msg_id, sig, pubkey, tweaked_pubkey)
+ except Exception as e:
+ jlog.error(f'FrostIPCServer.on_frost_sign: {repr(e)}')
+
+ async def send_frost_sig(self, msg_id, sig, pubkey, tweaked_pubkey):
+ try:
+ msg_dict = {
+ 'msg_id': msg_id,
+ 'cmd': 'frost_sig',
+ 'data': (sig, pubkey, tweaked_pubkey),
+ }
+ self.sw.write(self.encrypt_msg(msg_dict))
+ await self.sw.drain()
+ except Exception as e:
+ jlog.error(f'FrostIPCServer.send_frost_sig: {repr(e)}')
+
+
+class FrostIPCClient(IPCBase):
+
+ def __init__(self, wallet):
+ self.loop = asyncio.get_event_loop()
+ self.msg_id = 0
+ self.msg_futures = {}
+ self.wallet = wallet
+ self.pubkey = btc.privkey_to_pubkey(wallet._hostseckey)
+ self.sock_path = f'{wallet._storage.get_location()}.sock'
+ self.sr = None
+ self.sw = None
+
+ async def async_init(self):
+ try:
+ self.sr, self.sw = await asyncio.open_unix_connection(
+ self.sock_path)
+ self.loop.create_task(self.process_msgs())
+ except ConnectionRefusedError as e:
+            jmprint('No servefrost socket found. Run wallet-tool.py '
+                    'wallet.jmdat servefrost in a separate console.', "error")
+ twisted_sys_exit(EXIT_FAILURE)
+
+ async def process_msgs(self):
+ while True:
+ try:
+ line_data = await self.sr.readline()
+ if not line_data:
+ if self.sr.at_eof():
+ jlog.info('FrostIPCClient.process_msg: '
+ 'client disconnected')
+ self.sr = None
+ self.sw = None
+                        for fut in self.msg_futures.values():
+                            fut.cancel()
+                        self.msg_futures.clear()
+ break
+ else:
+ jlog.error('FrostIPCClient.process_msg: '
+ 'empty line ignored')
+ continue
+ enc_bytes = line_data.strip()
+ msg_dict = self.decrypt_msg(enc_bytes)
+ msg_id = msg_dict['msg_id']
+ cmd = msg_dict['cmd']
+ data = msg_dict['data']
+ if cmd in ['dkg_pubkey', 'frost_sig']:
+ await self.on_response(msg_id, data)
+ except Exception as e:
+ jlog.error(f'FrostIPCClient.process_msgs: {repr(e)}')
+ await asyncio.sleep(0.1)
+
+ async def on_response(self, msg_id, data):
+ fut = self.msg_futures.pop(msg_id, None)
+ if fut:
+ fut.set_result(data)
+
+ async def get_dkg_pubkey(self, mixdepth, address_type, index):
+ jlog.debug(f'FrostIPCClient.get_dkg_pubkey for mixdepth={mixdepth}, '
+ f'address_type={address_type}, index={index}')
+ try:
+ self.msg_id += 1
+ msg_dict = {
+ 'msg_id': self.msg_id,
+ 'cmd': 'get_dkg_pubkey',
+ 'data': (mixdepth, address_type, index),
+ }
+ self.sw.write(self.encrypt_msg(msg_dict))
+ await self.sw.drain()
+ fut = self.loop.create_future()
+ self.msg_futures[self.msg_id] = fut
+ await fut
+ pubkey = fut.result()
+ jlog.debug('FrostIPCClient.get_dkg_pubkey successfully got pubkey')
+ return pubkey
+ except Exception as e:
+ jlog.error(f'FrostIPCClient.get_dkg_pubkey: {repr(e)}')
+
+ async def frost_sign(self, mixdepth, address_type, index, sighash):
+ jlog.debug(f'FrostIPCClient.frost_sign for mixdepth={mixdepth}, '
+ f'address_type={address_type}, index={index}, '
+ f'sighash={sighash.hex()}')
+ try:
+ self.msg_id += 1
+ msg_dict = {
+ 'msg_id': self.msg_id,
+ 'cmd': 'frost_sign',
+ 'data': (mixdepth, address_type, index, sighash),
+ }
+ self.sw.write(self.encrypt_msg(msg_dict))
+ await self.sw.drain()
+ fut = self.loop.create_future()
+ self.msg_futures[self.msg_id] = fut
+ await fut
+ sig, pubkey, tweaked_pubkey = fut.result()
+ jlog.debug('FrostIPCClient.frost_sign successfully got signature')
+ return sig, pubkey, tweaked_pubkey
+ except Exception as e:
+ jlog.error(f'FrostIPCClient.frost_sign: {repr(e)}')
+ return None, None, None
diff --git a/src/jmclient/maker.py b/src/jmclient/maker.py
index 73d0e4a..72c5eb0 100644
--- a/src/jmclient/maker.py
+++ b/src/jmclient/maker.py
@@ -1,10 +1,15 @@
import base64
+import hashlib
import sys
import abc
import atexit
+from bitcointx.wallet import CCoinKey, XOnlyPubKey, tap_tweak_pubkey
+
import jmbitcoin as btc
-from jmbase import bintohex, hexbin, get_log, EXIT_FAILURE
+from jmbase import (bintohex, hexbin, async_hexbin, get_log, EXIT_FAILURE,
+ twisted_sys_exit)
+from jmclient.wallet import TaprootWallet, FrostWallet
from jmclient.wallet_service import WalletService
from jmclient.configure import jm_single
from jmclient.support import calc_cj_fee
@@ -29,7 +34,7 @@ class Maker(object):
self.sync_wait_loop.start(2.0, now=False)
self.aborted = False
- def try_to_create_my_orders(self):
+ async def try_to_create_my_orders(self):
"""Because wallet syncing is not synchronous(!),
we cannot calculate our offers until we know the wallet
contents, so poll until BlockchainInterface.wallet_synced
@@ -38,14 +43,14 @@ class Maker(object):
"""
if not self.wallet_service.synced:
return
- self.freeze_timelocked_utxos()
+ await self.freeze_timelocked_utxos()
try:
self.offerlist = self.create_my_orders()
except AssertionError:
jlog.error("Failed to create offers.")
self.aborted = True
return
- self.fidelity_bond = self.get_fidelity_bond_template()
+ self.fidelity_bond = await self.get_fidelity_bond_template()
self.sync_wait_loop.stop()
if not self.offerlist:
jlog.error("Failed to create offers.")
@@ -53,8 +58,9 @@ class Maker(object):
return
jlog.info('offerlist={}'.format(self.offerlist))
- @hexbin
- def on_auth_received(self, nick, offer, commitment, cr, amount, kphex):
+ @async_hexbin
+ async def on_auth_received(self, nick, offer, commitment,
+ cr, amount, kphex):
"""Receives data on proposed transaction offer from daemon, verifies
commitment, returns necessary data to send ioauth message (utxos etc)
"""
@@ -106,7 +112,7 @@ class Maker(object):
# authorisation of taker passed
# Find utxos for the transaction now:
- utxos, cj_addr, change_addr = self.oid_to_order(offer, amount)
+ utxos, cj_addr, change_addr = await self.oid_to_order(offer, amount)
if not utxos:
#could not find funds
return (False,)
@@ -116,15 +122,38 @@ class Maker(object):
# Need to choose an input utxo pubkey to sign with
# Just choose the first utxo in utxos and retrieve key from wallet.
auth_address = next(iter(utxos.values()))['address']
- auth_key = self.wallet_service.get_key_from_addr(auth_address)
- auth_pub = btc.privkey_to_pubkey(auth_key)
- # kphex was auto-converted by @hexbin but we actually need to sign the
- # hex version to comply with pre-existing JM protocol:
- btc_sig = btc.ecdsa_sign(bintohex(kphex), auth_key)
- return (True, utxos, auth_pub, cj_addr, change_addr, btc_sig)
+ wallet = self.wallet_service.wallet
+ if isinstance(wallet, FrostWallet):
+ path = wallet.addr_to_path(auth_address)
+ md, address_type, index = wallet.get_details(path)
+ kphex_hash = hashlib.sha256(bintohex(kphex).encode()).digest()
+ sig, _, tweaked_pubkey = await wallet.ipc_client.frost_sign(
+ md, address_type, index, kphex_hash)
+            if not sig:
+                return reject(str(tweaked_pubkey))
+            sig = base64.b64encode(sig).decode('ascii')
+ return (True, utxos, tweaked_pubkey[1:], cj_addr, change_addr, sig)
+ elif isinstance(wallet, TaprootWallet):
+ auth_key = self.wallet_service.get_key_from_addr(auth_address)
+ auth_pub = btc.privkey_to_pubkey(auth_key)
+ coin_key = CCoinKey.from_secret_bytes(auth_key[:32])
+ kphex_hash = hashlib.sha256(bintohex(kphex).encode()).digest()
+ sig = coin_key.sign_schnorr_tweaked(kphex_hash)
+ sig = base64.b64encode(sig).decode('ascii')
+ auth_pub_tweaked = tap_tweak_pubkey(XOnlyPubKey(auth_pub))
+ if auth_pub_tweaked is not None:
+ auth_pub_tweaked = bytes(auth_pub_tweaked[0])
+ return (True, utxos, auth_pub_tweaked, cj_addr, change_addr, sig)
+ else:
+ auth_key = self.wallet_service.get_key_from_addr(auth_address)
+ auth_pub = btc.privkey_to_pubkey(auth_key)
+            # kphex was auto-converted by @async_hexbin but we actually need
+            # to sign the hex version to comply with pre-existing JM protocol:
+ btc_sig = btc.ecdsa_sign(bintohex(kphex), auth_key)
+ return (True, utxos, auth_pub, cj_addr, change_addr, btc_sig)
- @hexbin
- def on_tx_received(self, nick, tx, offerinfo):
+ @async_hexbin
+ async def on_tx_received(self, nick, tx, offerinfo):
"""Called when the counterparty has sent an unsigned
transaction. Sigs are created and returned if and only
if the transaction passes verification checks (see
@@ -158,7 +187,7 @@ class Maker(object):
amount = utxos[utxo]['value']
our_inputs[index] = (script, amount)
- success, msg = self.wallet_service.sign_tx(tx, our_inputs)
+ success, msg = await self.wallet_service.sign_tx(tx, our_inputs)
assert success, msg
for index in our_inputs:
# The second case here is kept for backwards compatibility.
@@ -176,7 +205,7 @@ class Maker(object):
sigmsg = btc.CScript(sig)
else:
jlog.error("Taker has unknown wallet type")
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
sigs.append(base64.b64encode(sigmsg).decode('ascii'))
return (True, sigs)
@@ -267,7 +296,7 @@ class Maker(object):
self.offerlist.remove(oldorder_s[0])
self.offerlist += to_announce
- def freeze_timelocked_utxos(self):
+ async def freeze_timelocked_utxos(self):
"""
Freeze all wallet's timelocked UTXOs. These cannot be spent in a
coinjoin because of protocol limitations.
@@ -276,7 +305,7 @@ class Maker(object):
return
frozen_utxos = []
- md_utxos = self.wallet_service.get_utxos_by_mixdepth()
+ md_utxos = await self.wallet_service.get_utxos_by_mixdepth()
for tx, details \
in md_utxos[self.wallet_service.FIDELITY_BOND_MIXDEPTH].items():
if self.wallet_service.is_timelocked_path(details['path']):
@@ -301,7 +330,7 @@ class Maker(object):
"""
@abc.abstractmethod
- def oid_to_order(self, cjorder, amount):
+ async def oid_to_order(self, cjorder, amount):
"""Must convert an order with an offer/order id
into a set of utxos to fill the order.
Also provides the output addresses for the Taker.
@@ -319,7 +348,7 @@ class Maker(object):
a transaction into a block (e.g. announce orders)
"""
- def get_fidelity_bond_template(self):
+ async def get_fidelity_bond_template(self):
"""
Generates information about a fidelity bond which will be announced
By default returns no fidelity bond
diff --git a/src/jmclient/output.py b/src/jmclient/output.py
index 334b85d..39e0639 100644
--- a/src/jmclient/output.py
+++ b/src/jmclient/output.py
@@ -32,11 +32,11 @@ Are you sure you want to continue?"""
sweep_custom_change_warning = \
"Custom change cannot be set while doing a sweep (zero amount)."
-def fmt_utxos(utxos, wallet_service, prefix=''):
+async def fmt_utxos(utxos, wallet_service, prefix=''):
output = []
for u in utxos:
utxo_str = '{}{} - {}'.format(
- prefix, fmt_utxo(u), fmt_tx_data(utxos[u], wallet_service))
+ prefix, fmt_utxo(u), await fmt_tx_data(utxos[u], wallet_service))
output.append(utxo_str)
return '\n'.join(output)
@@ -45,14 +45,15 @@ def fmt_utxo(utxo):
assert success
return utxostr
-def fmt_tx_data(tx_data, wallet_service):
+async def fmt_tx_data(tx_data, wallet_service):
return 'path: {}, address: {} , value: {}'.format(
wallet_service.get_path_repr(wallet_service.script_to_path(tx_data['script'])),
- wallet_service.script_to_addr(tx_data['script']), tx_data['value'])
+ await wallet_service.script_to_addr(tx_data['script']), tx_data['value'])
-def generate_podle_error_string(priv_utxo_pairs, to, ts, wallet_service, cjamount,
- taker_utxo_age, taker_utxo_amtpercent):
+async def generate_podle_error_string(priv_utxo_pairs, to, ts, wallet_service,
+ cjamount, taker_utxo_age,
+ taker_utxo_amtpercent):
"""Gives detailed error information on why commitment sourcing failed.
"""
errmsg = ""
@@ -93,9 +94,10 @@ def generate_podle_error_string(priv_utxo_pairs, to, ts, wallet_service, cjamoun
"with 'python add-utxo.py --help'\n\n")
errmsg += ("***\nFor reference, here are the utxos in your wallet:\n")
- for md, utxos in wallet_service.get_utxos_by_mixdepth().items():
+ _utxos = await wallet_service.get_utxos_by_mixdepth()
+ for md, utxos in _utxos.items():
if not utxos:
continue
errmsg += ("\nmixdepth {}:\n{}".format(
- md, fmt_utxos(utxos, wallet_service, prefix=' ')))
+ md, await fmt_utxos(utxos, wallet_service, prefix=' ')))
return (errmsgheader, errmsg)
diff --git a/src/jmclient/payjoin.py b/src/jmclient/payjoin.py
index 24db1af..cf22ad5 100644
--- a/src/jmclient/payjoin.py
+++ b/src/jmclient/payjoin.py
@@ -371,7 +371,7 @@ class JMPayjoinManager(object):
else:
self.pj_state = self.JM_PJ_PAYJOIN_BROADCAST_FAILED
- def select_receiver_utxos(self):
+ async def select_receiver_utxos(self):
# Receiver chooses own inputs:
# For earlier ideas about more complex algorithms, see the gist comment here:
# https://gist.github.com/AdamISZ/4551b947789d3216bacfcb7af25e029e#gistcomment-2799709
@@ -389,7 +389,7 @@ class JMPayjoinManager(object):
self.user_info_callback("Choosing one coin at random")
try:
- my_utxos = self.wallet_service.select_utxos(
+ my_utxos = await self.wallet_service.select_utxos(
self.mixdepth, jm_single().DUST_THRESHOLD,
select_fn=select_one_utxo, minconfs=1)
except Exception as e:
@@ -473,7 +473,7 @@ def get_max_additional_fee_contribution(manager):
"contribution of: " + str(max_additional_fee_contribution))
return max_additional_fee_contribution
-def make_payment_psbt(manager, accept_callback=None, info_callback=None):
+async def make_payment_psbt(manager, accept_callback=None, info_callback=None):
""" Creates a valid payment transaction and PSBT for it,
and adds it to the JMPayjoinManager instance passed as argument.
Wallet should already be synced before calling here.
@@ -482,12 +482,12 @@ def make_payment_psbt(manager, accept_callback=None, info_callback=None):
# we can create a standard payment, but have it returned as a PSBT.
assert isinstance(manager, JMPayjoinManager)
assert manager.wallet_service.synced
- payment_psbt = direct_send(manager.wallet_service,
- manager.mixdepth,
- [(str(manager.destination), manager.amount)],
- accept_callback=accept_callback,
- info_callback=info_callback,
- with_final_psbt=True)
+ payment_psbt = await direct_send(
+ manager.wallet_service, manager.mixdepth,
+ [(str(manager.destination), manager.amount)],
+ accept_callback=accept_callback,
+ info_callback=info_callback,
+ with_final_psbt=True)
if not payment_psbt:
return (False, "could not create non-payjoin payment")
@@ -527,7 +527,7 @@ def make_payjoin_request_params(manager):
return params
-def send_payjoin(manager, accept_callback=None,
+async def send_payjoin(manager, accept_callback=None,
info_callback=None, return_deferred=False):
""" Given a JMPayjoinManager object `manager`, initialised with the
payment request data from the server, use its wallet_service to construct
@@ -542,7 +542,8 @@ def send_payjoin(manager, accept_callback=None,
asynchronously) - the `manager` object can be inspected for more detail.
(False, errormsg) in case of failure.
"""
- success, errmsg = make_payment_psbt(manager, accept_callback, info_callback)
+ success, errmsg = await make_payment_psbt(
+ manager, accept_callback, info_callback)
if not success:
return (False, errmsg)
@@ -601,7 +602,7 @@ def process_error_from_server(errormsg, errorcode, manager):
fallback_nonpayjoin_broadcast(errormsg.encode("utf-8"), manager)
return
-def process_payjoin_proposal_from_server(response_body, manager):
+async def process_payjoin_proposal_from_server(response_body, manager):
assert isinstance(manager, JMPayjoinManager)
try:
payjoin_proposal_psbt = \
@@ -621,7 +622,7 @@ def process_payjoin_proposal_from_server(response_body, manager):
payjoin_proposal_psbt.set_utxo(
manager.initial_psbt.inputs[j].utxo, i,
force_witness_utxo=True)
- signresultandpsbt, err = manager.wallet_service.sign_psbt(
+ signresultandpsbt, err = await manager.wallet_service.sign_psbt(
payjoin_proposal_psbt.serialize(), with_sign_result=True)
if err:
log.error("Failed to sign PSBT from the receiver, error: " + err)
@@ -678,7 +679,7 @@ class PayjoinConverter(object):
self.info_callback = info_callback
super().__init__()
- def request_to_psbt(self, payment_psbt_base64, sender_parameters):
+ async def request_to_psbt(self, payment_psbt_base64, sender_parameters):
""" Takes a payment psbt from a sender and their url parameters,
and returns a new payment PSBT proposal, assuming all conditions
are met.
@@ -756,7 +757,7 @@ class PayjoinConverter(object):
fallback_nonpayjoin_broadcast,
b"timeout", self.manager)
- receiver_utxos = self.manager.select_receiver_utxos()
+ receiver_utxos = await self.manager.select_receiver_utxos()
if not receiver_utxos:
return (False, "Could not select coins for payjoin",
"unavailable")
@@ -888,13 +889,14 @@ class PayjoinConverter(object):
log.debug("We created this unsigned tx: ")
log.debug(btc.human_readable_transaction(unsigned_payjoin_tx))
- r_payjoin_psbt = self.wallet_service.create_psbt_from_tx(unsigned_payjoin_tx,
- spent_outs=spent_outs)
+ r_payjoin_psbt = await self.wallet_service.create_psbt_from_tx(
+ unsigned_payjoin_tx, spent_outs=spent_outs)
log.debug("Receiver created payjoin PSBT:\n{}".format(
self.wallet_service.human_readable_psbt(r_payjoin_psbt)))
- signresultandpsbt, err = self.wallet_service.sign_psbt(r_payjoin_psbt.serialize(),
- with_sign_result=True)
+ signresultandpsbt, err = \
+ await self.wallet_service.sign_psbt(
+ r_payjoin_psbt.serialize(), with_sign_result=True)
assert not err, err
signresult, receiver_signed_psbt = signresultandpsbt
assert signresult.num_inputs_final == len(receiver_utxos)
@@ -965,13 +967,16 @@ class JMBIP78ReceiverManager(object):
self.shutdown_callback = shutdown_callback
self.receiving_address = None
self.mode = mode
- self.get_receiving_address()
+
+ async def async_init(self, wallet_service, mixdepth, amount,
+ mode="command-line"):
+ await self.get_receiving_address()
self.manager = JMPayjoinManager(wallet_service, mixdepth,
self.receiving_address, amount,
mode=mode,
user_info_callback=self.info_callback)
- def initiate(self):
+ async def initiate(self):
""" Called at reactor start to start up hidden service
and provide uri string to sender.
"""
@@ -980,7 +985,7 @@ class JMBIP78ReceiverManager(object):
# HTTP request simply doesn't arrive. Note also that the
# "params" argument is None as this is only learnt from request.
factory = BIP78ClientProtocolFactory(self, None,
self.receive_proposal_from_sender, None,
mode="receiver")
h = jm_single().config.get("DAEMON", "daemon_host")
p = jm_single().config.getint("DAEMON", "daemon_port")-2000
@@ -992,21 +997,21 @@ class JMBIP78ReceiverManager(object):
def default_info_callback(self, msg):
jmprint(msg)
- def get_receiving_address(self):
+ async def get_receiving_address(self):
# the receiving address is sourced from the 'next' mixdepth
# to avoid clustering of input and output:
next_mixdepth = (self.mixdepth + 1) % (
self.wallet_service.wallet.mixdepth + 1)
self.receiving_address = btc.CCoinAddress(
- self.wallet_service.get_internal_addr(next_mixdepth))
+ await self.wallet_service.get_internal_addr(next_mixdepth))
- def receive_proposal_from_sender(self, body, params):
+ async def receive_proposal_from_sender(self, body, params):
""" Accepts the contents of the HTTP request from the sender
and returns a payjoin proposal, or an error.
"""
self.pj_converter = PayjoinConverter(self.manager,
self.shutdown, self.info_callback)
- success, a, b = self.pj_converter.request_to_psbt(body, params)
+ success, a, b = await self.pj_converter.request_to_psbt(body, params)
if not success:
return (False, a, b)
else:
diff --git a/src/jmclient/podle.py b/src/jmclient/podle.py
index 1ad5881..be372b8 100644
--- a/src/jmclient/podle.py
+++ b/src/jmclient/podle.py
@@ -9,7 +9,7 @@ from pprint import pformat
from jmbase import jmprint
from jmbitcoin import multiply, add_pubkeys, getG, podle_PublicKey,\
podle_PrivateKey, N, podle_PublicKey_class
-from jmbase import (EXIT_FAILURE, utxostr_to_utxo,
+from jmbase import (EXIT_FAILURE, utxostr_to_utxo, twisted_sys_exit,
utxo_to_utxostr, hextobin, bintohex)
PODLE_COMMIT_FILE = None
@@ -345,7 +345,7 @@ def read_from_podle_file():
#Exit conditions cannot be included in tests.
jmprint("the file: " + PODLE_COMMIT_FILE + " is not valid json.",
"error")
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
if 'used' not in c.keys() or 'external' not in c.keys():
raise PoDLEError("Incorrectly formatted file: " + PODLE_COMMIT_FILE)
diff --git a/src/jmclient/snicker_receiver.py b/src/jmclient/snicker_receiver.py
index ff3d981..c267b5b 100644
--- a/src/jmclient/snicker_receiver.py
+++ b/src/jmclient/snicker_receiver.py
@@ -109,7 +109,7 @@ class SNICKERReceiver(object):
jlog.info("created proposals source file.")
- def default_acceptance_callback(self, our_ins, their_ins,
+ async def default_acceptance_callback(self, our_ins, their_ins,
our_outs, their_outs):
""" Accepts lists of inputs as CTXIns,
a single output (belonging to us) as a CTxOut,
@@ -124,7 +124,7 @@ class SNICKERReceiver(object):
# ours.
# we use get_all* because for these purposes mixdepth
# is irrelevant.
- utxos = self.wallet_service.get_all_utxos()
+ utxos = await self.wallet_service.get_all_utxos()
our_in_amts = []
our_out_amts = []
for i in our_ins:
@@ -149,7 +149,7 @@ class SNICKERReceiver(object):
self.successful_txs.append(tx)
jlog.info(btc.human_readable_transaction(tx))
- def process_proposals(self, proposals):
+ async def process_proposals(self, proposals):
""" This is the "meat" of the SNICKERReceiver service.
It parses proposals and creates and broadcasts transactions
with the wallet, assuming all conditions are met.
@@ -199,7 +199,7 @@ class SNICKERReceiver(object):
jlog.debug("Key not recognized as part of our "
"wallet, ignoring.")
continue
- result = self.wallet_service.parse_proposal_to_signed_tx(
+ result = await self.wallet_service.parse_proposal_to_signed_tx(
addr, p, self.acceptance_callback)
if result[0] is not None:
tx, tweak, out_spk = result
@@ -234,7 +234,7 @@ class SNICKERReceiver(object):
# the coinjoin transaction to the network, which is advisably
# conservative (never possible to have broadcast a tx without
# having already stored the output's key).
- success, msg = self.wallet_service.check_tweak_matches_and_import(
+ success, msg = await self.wallet_service.check_tweak_matches_and_import(
addr, tweak, tweaked_key, source_mixdepth)
if not success:
jlog.error(msg)
diff --git a/src/jmclient/storage.py b/src/jmclient/storage.py
index d685422..72730cc 100644
--- a/src/jmclient/storage.py
+++ b/src/jmclient/storage.py
@@ -365,3 +365,186 @@ class VolatileStorage(Storage):
def get_location(self):
return None
+
+
+class DKGStorage(Storage):
+
+ MAGIC_UNENC = b'JMDKGDAT'
+ MAGIC_ENC = b'JMDKGENC'
+ MAGIC_DETECT_ENC = b'JMDKGDAT'
+
+ @staticmethod
+ def dkg_path(path):
+ return f'{path}.dkg'
+
+
+class DKGRecoveryStorage(object):
+
+ MAGIC_UNENC = b'JMDKGREC'
+
+ def __init__(self, path, create=False, read_only=False):
+ self.path = path
+ self._lock_file = None
+ self._data_checksum = None
+ self.data = None
+ self.read_only = read_only
+ self.newly_created = False
+
+ if not os.path.isfile(path):
+ if create and not read_only:
+ self._create_new()
+ self._save_file()
+ self.newly_created = True
+ else:
+ raise StorageError(f'DKG Recovery File {self.path} not found.')
+ elif create:
+ raise StorageError(f'DKG Recovery File {self.path} '
+ f'already exists.')
+ else:
+ self._load_file()
+
+ assert self.data is not None
+ assert self._data_checksum is not None
+
+ self._create_lock()
+
+ @staticmethod
+ def dkg_recovery_path(path):
+ return f'{path}.dkg_recovery'
+
+ @staticmethod
+ def _serialize(data):
+ return bencoder.bencode(data)
+
+ @staticmethod
+ def _deserialize(data):
+ return bencoder.bdecode(data)
+
+ @staticmethod
+ def _get_lock_filename(path: str) -> str:
+ (path_head, path_tail) = os.path.split(path)
+ return os.path.join(path_head, '.' + path_tail + '.lock')
+
+ @classmethod
+ def verify_lock(cls, path: str):
+ locked_by_pid = cls._get_locking_pid(path)
+ if locked_by_pid >= 0:
+ raise RetryableStorageError(
+ "File is currently in use (locked by pid {}). "
+ "If this is a leftover from a crashed instance "
+ "you need to remove the lock file `{}` manually.".
+ format(locked_by_pid, cls._get_lock_filename(path))
+ )
+
+ @classmethod
+ def _get_locking_pid(cls, path: str) -> int:
+ """Return the locking PID: -1 if no lockfile is found, 0 if the PID cannot be read."""
+ try:
+ with open(cls._get_lock_filename(path), 'r') as f:
+ return int(f.read())
+ except FileNotFoundError:
+ return -1
+ except ValueError:
+ return 0
+
+ @classmethod
+ def is_storage_file(cls, path):
+ return cls._get_file_magic(path) == cls.MAGIC_UNENC
+
+ @classmethod
+ def _get_file_magic(cls, path):
+ with open(path, 'rb') as fh:
+ return fh.read(len(cls.MAGIC_UNENC))
+
+ def _create_new(self):
+ self.data = {}
+
+ def _save_file(self):
+ assert self.read_only is False
+ data = self._serialize(self.data)
+ magic = self.MAGIC_UNENC
+ self._write_file(magic + data)
+ self._update_data_hash()
+
+ def _write_file(self, data):
+ assert self.read_only is False
+ if not os.path.exists(self.path):
+ # newly created storage
+ with open(self.path, 'wb') as fh:
+ fh.write(data)
+ return
+
+ # using a tmpfile ensures the write is atomic
+ tmpfile = '{}.tmp'.format(self.path)
+
+ with open(tmpfile, 'wb') as fh:
+ shutil.copystat(self.path, tmpfile)
+ fh.write(data)
+
+ #FIXME: behaviour with symlinks might be weird
+ shutil.move(tmpfile, self.path)
+
+ def _update_data_hash(self):
+ self._data_checksum = self._get_data_checksum()
+
+ def _get_data_checksum(self):
+ if self.data is None: #pragma: no cover
+ return None
+ return sha256(self._serialize(self.data)).digest()
+
+ def _create_lock(self):
+ if not self.read_only:
+ self._lock_file = self._get_lock_filename(self.path)
+ try:
+ with open(self._lock_file, 'x') as f:
+ f.write(str(os.getpid()))
+ except FileExistsError:
+ self._lock_file = None
+ self.verify_lock(self.path)
+
+ atexit.register(self.close)
+
+ def was_changed(self):
+ return self._data_checksum != self._get_data_checksum()
+
+ def save(self):
+ if self.read_only:
+ raise StorageError("Read-only recovery storage cannot be saved.")
+ self._save_file()
+
+ def _load_file(self):
+ data = self._read_file()
+ assert len(self.MAGIC_UNENC) == 8
+ magic = data[:8]
+
+ if magic != self.MAGIC_UNENC:
+ raise StorageError("File does not appear to be "
+ "a DKG Recovery File.")
+
+ data = data[8:]
+ self.data = self._deserialize(data)
+ self._update_data_hash()
+
+ def _read_file(self):
+ # this method mainly exists for easier mocking
+ with open(self.path, 'rb') as fh:
+ return fh.read()
+
+ def get_location(self):
+ return self.path
+
+ def _remove_lock(self):
+ if self._lock_file is not None:
+ try:
+ os.remove(self._lock_file)
+ except FileNotFoundError:
+ pass
+
+ def close(self):
+ if not self.read_only and self.was_changed():
+ self._save_file()
+ self._remove_lock()
+ self.read_only = True
+
+ def __del__(self):
+ self.close()
diff --git a/src/jmclient/taker.py b/src/jmclient/taker.py
index 998e83f..a4f446a 100644
--- a/src/jmclient/taker.py
+++ b/src/jmclient/taker.py
@@ -1,9 +1,13 @@
#! /usr/bin/env python
+import asyncio
import base64
+import hashlib
import pprint
import random
from typing import Any, NamedTuple, Optional
+
+from bitcointx.core.key import XOnlyPubKey
from twisted.internet import reactor, task
import jmbitcoin as btc
@@ -13,7 +17,7 @@ from jmclient.support import (calc_cj_fee, fidelity_bond_weighted_order_choose,
choose_sweep_orders)
from jmclient.wallet import (estimate_tx_fee, compute_tx_locktime,
FidelityBondMixin, UnknownAddressForLabel,
- TaprootWallet)
+ TaprootWallet, FrostWallet)
from jmclient.podle import generate_podle, get_podle_commitments
from jmclient.wallet_service import WalletService
from jmclient.fidelity_bond import FidelityBondProof
@@ -171,19 +175,27 @@ class Taker(object):
return
self.honest_only = truefalse
- def initialize(self, orderbook, fidelity_bonds_info):
+ async def initialize(self, orderbook, fidelity_bonds_info):
"""Once the daemon is active and has returned the current orderbook,
select offers, re-initialize variables and prepare a commitment,
then send it to the protocol to fill offers.
"""
if self.aborted:
return (False,)
- self.taker_info_callback("INFO", "Received offers from joinmarket pit")
+ info_cb_res = self.taker_info_callback(
+ "INFO", "Received offers from joinmarket pit")
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
#choose the next item in the schedule
self.schedule_index += 1
if self.schedule_index == len(self.schedule):
- self.taker_info_callback("INFO", "Finished all scheduled transactions")
- self.on_finished_callback(True)
+ info_cb_res = self.taker_info_callback(
+ "INFO", "Finished all scheduled transactions")
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
+ finished_cb_res = self.on_finished_callback(True)
+ if asyncio.iscoroutine(finished_cb_res):
+ await finished_cb_res
return (False,)
else:
#read the settings from the schedule entry
@@ -224,7 +236,8 @@ class Taker(object):
self.wallet_service.mixdepth + 1)
jlog.info("Choosing a destination from mixdepth: " + str(
next_mixdepth))
- self.my_cj_addr = self.wallet_service.get_internal_addr(next_mixdepth)
+ self.my_cj_addr = await self.wallet_service.get_internal_addr(
+ next_mixdepth)
jlog.info("Chose destination address: " + self.my_cj_addr)
self.outputs = []
self.cjfee_total = 0
@@ -238,14 +251,17 @@ class Taker(object):
offer["fidelity_bond_value"] = fidelity_bond_values.get(offer["counterparty"], 0)
sweep = True if self.cjamount == 0 else False
- if not self.filter_orderbook(orderbook, sweep):
+ if not await self.filter_orderbook(orderbook, sweep):
return (False,)
#choose coins to spend
- self.taker_info_callback("INFO", "Preparing bitcoin data..")
- if not self.prepare_my_bitcoin_data():
+ info_cb_res = self.taker_info_callback(
+ "INFO", "Preparing bitcoin data..")
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
+ if not await self.prepare_my_bitcoin_data():
return (False,)
#Prepare a commitment
- commitment, revelation, errmsg = self.make_commitment()
+ commitment, revelation, errmsg = await self.make_commitment()
if not commitment:
utxo_pairs, to, ts = revelation
if len(to) == 0:
@@ -253,20 +269,26 @@ class Taker(object):
#until they get old enough; otherwise, we have to abort
#(TODO, it's possible for user to dynamically add more coins,
#consider if this option means we should stay alive).
- self.taker_info_callback("ABORT", errmsg)
+ info_cb_res = self.taker_info_callback("ABORT", errmsg)
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
return ("commitment-failure",)
else:
- self.taker_info_callback("INFO", errmsg)
+ info_cb_res = self.taker_info_callback("INFO", errmsg)
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
return (False,)
else:
- self.taker_info_callback("INFO", errmsg)
+ info_cb_res = self.taker_info_callback("INFO", errmsg)
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
#Initialization has been successful. We must set the nonrespondants
#now to keep track of what changed when we receive the utxo data
self.nonrespondants = list(self.orderbook.keys())
return (True, self.cjamount, commitment, revelation, self.orderbook)
- def filter_orderbook(self, orderbook, sweep=False):
+ async def filter_orderbook(self, orderbook, sweep=False):
#If honesty filter is set, we immediately filter to only the prescribed
#honest makers before continuing. In this case, the number of
#counterparties should already match, and this has to be set by the
@@ -303,6 +325,8 @@ class Taker(object):
accepted = self.filter_orders_callback([self.orderbook,
self.total_cj_fee],
self.cjamount)
+ if asyncio.iscoroutine(accepted):
+ accepted = await accepted
if accepted == "retry":
#Special condition if Taker is "determined to continue"
#(such as tumbler); even though these offers are rejected,
@@ -313,7 +337,7 @@ class Taker(object):
return False
return True
- def prepare_my_bitcoin_data(self):
+ async def prepare_my_bitcoin_data(self):
"""Get a coinjoin address and a change address; prepare inputs
appropriate for this transaction"""
if not self.my_cj_addr:
@@ -324,7 +348,9 @@ class Taker(object):
self.my_change_addr = self.custom_change_address
else:
try:
- self.my_change_addr = self.wallet_service.get_internal_addr(self.mixdepth)
+ self.my_change_addr = \
+ await self.wallet_service.get_internal_addr(
+ self.mixdepth)
if self.change_label:
try:
self.wallet_service.set_address_label(
@@ -333,7 +359,10 @@ class Taker(object):
# ignore, will happen with custom change not part of a wallet
pass
except:
- self.taker_info_callback("ABORT", "Failed to get a change address")
+ info_cb_res = self.taker_info_callback(
+ "ABORT", "Failed to get a change address")
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
return False
#adjust the required amount upwards to anticipate an increase in
#transaction fees after re-estimation; this is sufficiently conservative
@@ -347,15 +376,19 @@ class Taker(object):
total_amount = self.cjamount + self.total_cj_fee + self.total_txfee
jlog.info('total estimated amount spent = ' + btc.amount_to_str(total_amount))
try:
- self.input_utxos = self.wallet_service.select_utxos(self.mixdepth, total_amount,
- minconfs=1)
+ self.input_utxos = await self.wallet_service.select_utxos(
+ self.mixdepth, total_amount, minconfs=1)
except Exception as e:
- self.taker_info_callback("ABORT",
- "Unable to select sufficient coins: " + repr(e))
+ info_cb_res = self.taker_info_callback(
+ "ABORT", "Unable to select sufficient coins: " + repr(e))
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
return False
else:
#sweep
- self.input_utxos = self.wallet_service.get_utxos_by_mixdepth()[self.mixdepth]
+ ws = self.wallet_service
+ _utxos = await ws.get_utxos_by_mixdepth()
+ self.input_utxos = _utxos[self.mixdepth]
self.my_change_addr = None
#do our best to estimate the fee based on the number of
#our own utxos; this estimate may be significantly higher
@@ -388,20 +421,25 @@ class Taker(object):
self.ignored_makers, allowed_types=allowed_types,
max_cj_fee=self.max_cj_fee)
if not self.orderbook:
- self.taker_info_callback("ABORT",
- "Could not find orders to complete transaction")
+ info_cb_res = self.taker_info_callback(
+ "ABORT", "Could not find orders to complete transaction")
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
return False
if self.filter_orders_callback:
- if not self.filter_orders_callback((self.orderbook,
- self.total_cj_fee),
- self.cjamount):
+ accepted = self.filter_orders_callback((self.orderbook,
+ self.total_cj_fee),
+ self.cjamount)
+ if asyncio.iscoroutine(accepted):
+ accepted = await accepted
+ if not accepted:
return False
self.utxos = {None: list(self.input_utxos.keys())}
return True
@hexbin
- def receive_utxos(self, ioauth_data):
+ async def receive_utxos(self, ioauth_data):
"""Triggered when the daemon returns utxo data from
makers who responded; this is the completion of phase 1
of the protocol
@@ -442,11 +480,17 @@ class Taker(object):
#know for sure that the data meets all business-logic requirements.
if len(self.maker_utxo_data) < jm_single().config.getint(
"POLICY", "minimum_makers"):
- self.taker_info_callback("INFO", "Not enough counterparties, aborting.")
+ info_cb_res = self.taker_info_callback(
+ "INFO", "Not enough counterparties, aborting.")
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
return (False,
"Not enough counterparties responded to fill, giving up")
- self.taker_info_callback("INFO", "Got all parts, enough to build a tx")
+ info_cb_res = self.taker_info_callback(
+ "INFO", "Got all parts, enough to build a tx")
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
#The list self.nonrespondants is now reset and
#used to track return of signatures for phase 2
@@ -537,7 +581,10 @@ class Taker(object):
jlog.info('obtained tx\n' + btc.human_readable_transaction(
self.latest_tx))
- self.taker_info_callback("INFO", "Built tx, sending to counterparties.")
+ info_cb_res = self.taker_info_callback(
+ "INFO", "Built tx, sending to counterparties.")
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
return (True, list(self.maker_utxo_data.keys()),
self.latest_tx.serialize())
@@ -591,9 +638,14 @@ class Taker(object):
f"maker's ({nick}) proposed utxo is not confirmed, "
"rejecting."])
try:
- if self.wallet_service.pubkey_has_script(
- auth_pub, inp['script']):
- break
+ if isinstance(self.wallet_service.wallet, TaprootWallet):
+ if self.wallet_service.output_pubkey_has_script(
+ auth_pub, inp['script']):
+ break
+ else:
+ if self.wallet_service.pubkey_has_script(
+ auth_pub, inp['script']):
+ break
except EngineError as e:
pass
else:
@@ -627,17 +679,33 @@ class Taker(object):
with an ecdsa verification.
"""
try:
+ wallet = self.wallet_service.wallet
# maker pubkey as message is in hex format:
- if not btc.ecdsa_verify(bintohex(maker_pk), btc_sig, auth_pub):
- jlog.debug('signature didnt match pubkey and message')
- return False
+ if isinstance(wallet, (TaprootWallet, FrostWallet)):
+ pubkey = XOnlyPubKey(auth_pub)
+ kphex_hash = hashlib.sha256(
+ bintohex(maker_pk).encode()).digest()
+ btc_sig = base64.b64decode(btc_sig)
+ if not pubkey.verify_schnorr(kphex_hash, btc_sig):
+ jlog.debug('schnorr signature didnt match '
+ 'pubkey and message')
+ return False
+ else:
+ if not btc.ecdsa_verify(bintohex(maker_pk), btc_sig, auth_pub):
+ jlog.debug('signature didnt match pubkey and message')
+ return False
except Exception as e:
- jlog.info("Failed ecdsa verify for maker pubkey: " + bintohex(maker_pk))
+ if isinstance(wallet, (TaprootWallet, FrostWallet)):
+ jlog.info("Failed schnorr verify for maker pubkey: " +
+ bintohex(maker_pk))
+ else:
+ jlog.info("Failed ecdsa verify for maker pubkey: " +
+ bintohex(maker_pk))
jlog.info("Exception was: " + repr(e))
return False
return True
- def on_sig(self, nick, sigb64):
+ async def on_sig(self, nick, sigb64):
"""Processes transaction signatures from counterparties.
If all signatures received correctly, returns the result
of self.self_sign_and_push() (i.e. we complete the signing
@@ -728,7 +796,7 @@ class Taker(object):
spent_outputs = None
wallet = self.wallet_service.wallet
- if isinstance(wallet, TaprootWallet):
+ if isinstance(wallet, (TaprootWallet, FrostWallet)):
spent_outputs = wallet.get_spent_outputs(self.latest_tx)
sig_good = btc.verify_tx_input(self.latest_tx, u[0], scriptSig,
scriptPubKey, amount=ver_amt, witness=witness,
@@ -776,11 +844,14 @@ class Taker(object):
return False
assert not len(self.nonrespondants)
jlog.info('all makers have sent their signatures')
- self.taker_info_callback("INFO", "Transaction is valid, signing..")
+ info_cb_res = self.taker_info_callback(
+ "INFO", "Transaction is valid, signing..")
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
jlog.debug("schedule item was: " + str(self.schedule[self.schedule_index]))
- return self.self_sign_and_push()
+ return await self.self_sign_and_push()
- def make_commitment(self):
+ async def make_commitment(self):
"""The Taker default commitment function, which uses PoDLE.
Alternative commitment types should use a different commit type byte.
This will allow future upgrades to provide different style commitments
@@ -811,7 +882,7 @@ class Taker(object):
newresults.append(utxos[i])
return newresults, too_new, too_small
- def priv_utxo_pairs_from_utxos(utxos, age, amt):
+ async def priv_utxo_pairs_from_utxos(utxos, age, amt):
#returns pairs list of (priv, utxo) for each valid utxo;
#also returns lists "too_new" and "too_small" for any
#utxos that did not satisfy the criteria for debugging.
@@ -822,9 +893,10 @@ class Taker(object):
for k, v in new_utxos_dict.items():
# filter out any non-standard utxos:
path = self.wallet_service.script_to_path(v["script"])
- if not self.wallet_service.is_standard_wallet_script(path):
+ ws = self.wallet_service
+ if not await ws.is_standard_wallet_script(path):
continue
- addr = self.wallet_service.script_to_addr(v["script"])
+ addr = await self.wallet_service.script_to_addr(v["script"])
priv = self.wallet_service.get_key_from_addr(addr)
if priv: #can be null from create-unsigned
priv_utxo_pairs.append((priv, k))
@@ -837,8 +909,8 @@ class Taker(object):
amt = int(self.cjamount *
jm_single().config.getint("POLICY",
"taker_utxo_amtpercent") / 100.0)
- priv_utxo_pairs, to, ts = priv_utxo_pairs_from_utxos(self.input_utxos,
- age, amt)
+ priv_utxo_pairs, to, ts = await priv_utxo_pairs_from_utxos(
+ self.input_utxos, age, amt)
#For podle data format see: podle.PoDLE.reveal()
#In first round try, don't use external commitments
@@ -857,12 +929,15 @@ class Taker(object):
#in the transaction, about to be consumed, rather than use
#random utxos that will persist after. At this step we also
#allow use of external utxos in the json file.
- mixdepth_utxos = self.wallet_service.get_utxos_by_mixdepth()[self.mixdepth]
+ ws = self.wallet_service
+ _utxos = await ws.get_utxos_by_mixdepth()
+ mixdepth_utxos = _utxos[self.mixdepth]
if len(self.input_utxos) == len(mixdepth_utxos):
# Already tried the whole mixdepth
podle_data = generate_podle([], tries, ext_valid)
else:
- priv_utxo_pairs, to, ts = priv_utxo_pairs_from_utxos(mixdepth_utxos, age, amt)
+ priv_utxo_pairs, to, ts = await priv_utxo_pairs_from_utxos(
+ mixdepth_utxos, age, amt)
podle_data = generate_podle(priv_utxo_pairs, tries, ext_valid)
if podle_data:
jlog.debug("Generated PoDLE: " + repr(podle_data))
@@ -870,7 +945,8 @@ class Taker(object):
podle_data.serialize_revelation(),
"Commitment sourced OK")
else:
- errmsgheader, errmsg = generate_podle_error_string(priv_utxo_pairs,
+ errmsgheader, errmsg = await generate_podle_error_string(
+ priv_utxo_pairs,
to, ts, self.wallet_service, self.cjamount,
jm_single().config.get("POLICY", "taker_utxo_age"),
jm_single().config.get("POLICY", "taker_utxo_amtpercent"))
@@ -890,7 +966,7 @@ class Taker(object):
#Note: donation code removed (possibly temporarily)
raise NotImplementedError
- def self_sign(self):
+ async def self_sign(self):
# now sign it ourselves
our_inputs = {}
for index, ins in enumerate(self.latest_tx.vin):
@@ -901,11 +977,12 @@ class Taker(object):
script = self.input_utxos[utxo]['script']
amount = self.input_utxos[utxo]['value']
our_inputs[index] = (script, amount)
- success, msg = self.wallet_service.sign_tx(self.latest_tx, our_inputs)
+ success, msg = await self.wallet_service.sign_tx(
+ self.latest_tx, our_inputs)
if not success:
jlog.error("Failed to sign transaction: " + msg)
- def handle_unbroadcast_transaction(self, txid, tx):
+ async def handle_unbroadcast_transaction(self, txid, tx):
""" The wallet service will handle dangling
callbacks for transactions but we want to reattempt
broadcast in case the cause of the problem is a
@@ -922,7 +999,10 @@ class Taker(object):
if jm_single().config.get('POLICY', 'tx_broadcast') == "not-self":
warnmsg = ("You have chosen not to broadcast from your own "
"node. The transaction is NOT broadcast.")
- self.taker_info_callback("ABORT", warnmsg + "\nSee log for details.")
+ info_cb_res = self.taker_info_callback(
+ "ABORT", warnmsg + "\nSee log for details.")
+ if asyncio.iscoroutine(info_cb_res):
+ await info_cb_res
# warning is arguably not correct but it will stand out more:
jlog.warn(warnmsg)
jlog.info(btc.human_readable_transaction(tx))
@@ -934,7 +1014,7 @@ class Taker(object):
def push_ourselves(self):
return jm_single().bc_interface.pushtx(self.latest_tx.serialize())
- def push(self):
+ async def push(self):
jlog.debug('\n' + bintohex(self.latest_tx.serialize()))
self.txid = bintohex(self.latest_tx.GetTxid()[::-1])
jlog.info('txid = ' + self.txid)
@@ -955,7 +1035,8 @@ class Taker(object):
task.deferLater(reactor,
float(jm_single().config.getint(
"TIMEOUT", "unconfirm_timeout_sec")),
- self.handle_unbroadcast_transaction, self.txid, self.latest_tx)
+ self.handle_unbroadcast_transaction,
+ self.txid, self.latest_tx)
tx_broadcast = jm_single().config.get('POLICY', 'tx_broadcast')
nick_to_use = None
@@ -977,15 +1058,17 @@ class Taker(object):
"methods supported. Reverting to self-broadcast.")
pushed = self.push_ourselves()
if not pushed:
- self.on_finished_callback(False, fromtx=True)
+ finished_cb_res = self.on_finished_callback(False, fromtx=True)
+ if asyncio.iscoroutine(finished_cb_res):
+ await finished_cb_res
else:
if nick_to_use:
return (nick_to_use, self.latest_tx.serialize())
#if push was not successful, return None
- def self_sign_and_push(self):
- self.self_sign()
- return self.push()
+ async def self_sign_and_push(self):
+ await self.self_sign()
+ return await self.push()
def tx_match(self, txd):
# Takers process only in series, so this should not occur:
@@ -995,12 +1078,14 @@ class Taker(object):
return False
return True
- def unconfirm_callback(self, txd, txid):
+ async def unconfirm_callback(self, txd, txid):
if not self.tx_match(txd):
return False
jlog.info("Transaction seen on network, waiting for confirmation")
#To allow client to mark transaction as "done" (e.g. by persisting state)
- self.on_finished_callback(True, fromtx="unconfirmed")
+ finished_cb_res = self.on_finished_callback(True, fromtx="unconfirmed")
+ if asyncio.iscoroutine(finished_cb_res):
+ await finished_cb_res
self.waiting_for_conf = True
confirm_timeout_sec = float(jm_single().config.get(
"TIMEOUT", "confirm_timeout_hours")) * 3600
@@ -1010,7 +1095,7 @@ class Taker(object):
"transaction with txid " + str(txid) + " not confirmed.")
return True
- def confirm_callback(self, txd, txid, confirmations):
+ async def confirm_callback(self, txd, txid, confirmations):
if not self.tx_match(txd):
return False
self.waiting_for_conf = False
@@ -1022,14 +1107,17 @@ class Taker(object):
jlog.debug("Confirmed callback in taker, confs: " + str(confirmations))
fromtx=False if self.schedule_index + 1 == len(self.schedule) else True
waittime = self.schedule[self.schedule_index][4]
- self.on_finished_callback(True, fromtx=fromtx, waittime=waittime,
- txdetails=(txd, txid))
+ finished_cb_res = self.on_finished_callback(
+ True, fromtx=fromtx, waittime=waittime, txdetails=(txd, txid))
+ if asyncio.iscoroutine(finished_cb_res):
+ await finished_cb_res
return True
def _is_our_input(self, tx_input):
utxo = (tx_input.prevout.hash[::-1], tx_input.prevout.n)
return utxo in self.input_utxos
+
def round_to_significant_figures(d, sf):
'''Rounding number d to sf significant figures in base 10'''
for p in range(-10, 15):
diff --git a/src/jmclient/taker_utils.py b/src/jmclient/taker_utils.py
index f43b763..16d71d3 100644
--- a/src/jmclient/taker_utils.py
+++ b/src/jmclient/taker_utils.py
@@ -17,7 +17,7 @@ from .wallet_service import WalletService
from jmbitcoin import make_shuffled_tx, amount_to_str, \
PartiallySignedTransaction, CMutableTxOut,\
human_readable_transaction
-from jmbase.support import EXIT_SUCCESS
+from jmbase.support import EXIT_SUCCESS, twisted_sys_exit
log = get_log()
"""
@@ -34,7 +34,7 @@ def get_utxo_scripts(wallet: BaseWallet, utxos: dict) -> list:
script_types.append(wallet.get_outtype(utxo["address"]))
return script_types
-def direct_send(wallet_service: WalletService,
+async def direct_send(wallet_service: WalletService,
mixdepth: int,
dest_and_amounts: List[Tuple[str, int]],
answeryes: bool = False,
@@ -128,7 +128,8 @@ def direct_send(wallet_service: WalletService,
#doing a sweep
destination = dest_and_amounts[0][0]
amount = dest_and_amounts[0][1]
- utxos = wallet_service.get_utxos_by_mixdepth()[mixdepth]
+ _utxos = await wallet_service.get_utxos_by_mixdepth()
+ utxos = _utxos[mixdepth]
if utxos == {}:
log.error(
f"There are no available utxos in mixdepth {mixdepth}, "
@@ -162,8 +163,8 @@ def direct_send(wallet_service: WalletService,
# of non-standard input types at this point.
initial_fee_est = estimate_tx_fee(8, len(dest_and_amounts) + 1,
txtype=txtype, outtype=outtypes)
- utxos = wallet_service.select_utxos(mixdepth, amount + initial_fee_est,
- includeaddr=True)
+ utxos = await wallet_service.select_utxos(
+ mixdepth, amount + initial_fee_est, includeaddr=True)
script_types = get_utxo_scripts(wallet_service.wallet, utxos)
if len(utxos) < 8:
fee_est = estimate_tx_fee(len(utxos), len(dest_and_amounts) + 1,
@@ -175,7 +176,7 @@ def direct_send(wallet_service: WalletService,
outs = []
for out in dest_and_amounts:
outs.append({"value": out[1], "address": out[0]})
- change_addr = wallet_service.get_internal_addr(mixdepth) \
+ change_addr = await wallet_service.get_internal_addr(mixdepth) \
if custom_change_addr is None else custom_change_addr
outs.append({"value": changeval, "address": change_addr})
@@ -215,8 +216,10 @@ def direct_send(wallet_service: WalletService,
if with_final_psbt:
# here we have the PSBTWalletMixin do the signing stage
# for us:
- new_psbt = wallet_service.create_psbt_from_tx(tx, spent_outs=spent_outs)
- serialized_psbt, err = wallet_service.sign_psbt(new_psbt.serialize())
+ new_psbt = await wallet_service.create_psbt_from_tx(
+ tx, spent_outs=spent_outs)
+ serialized_psbt, err = await wallet_service.sign_psbt(
+ new_psbt.serialize())
if err:
log.error("Failed to sign PSBT, quitting. Error message: " + err)
return False
@@ -225,7 +228,7 @@ def direct_send(wallet_service: WalletService,
print(wallet_service.human_readable_psbt(new_psbt_signed))
return new_psbt_signed
else:
- success, msg = wallet_service.sign_tx(tx, inscripts)
+ success, msg = await wallet_service.sign_tx(tx, inscripts)
if not success:
log.error("Failed to sign transaction, quitting. Error msg: " + msg)
return
@@ -305,7 +308,7 @@ def restart_wait(txid):
return False
if res["confirmations"] < 0:
log.warn("Tx: " + txid + " has a conflict, abandoning.")
- sys.exit(EXIT_SUCCESS)
+ twisted_sys_exit(EXIT_SUCCESS)
else:
log.debug("Tx: " + str(txid) + " has " + str(
res["confirmations"]) + " confirmations.")
diff --git a/src/jmclient/wallet.py b/src/jmclient/wallet.py
index 072be3d..411f591 100644
--- a/src/jmclient/wallet.py
+++ b/src/jmclient/wallet.py
@@ -1,4 +1,5 @@
+import asyncio
from configparser import NoOptionError
import warnings
import functools
@@ -21,7 +22,6 @@ from numbers import Integral
from math import exp
from typing import Any, Dict, Optional, Tuple
-
from .configure import jm_single
from .blockchaininterface import INF_HEIGHT
from .support import select_gradual, select_greedy, select_greediest, \
@@ -31,10 +31,18 @@ from .cryptoengine import TYPE_P2PKH, TYPE_P2SH_P2WPKH, TYPE_P2WSH,\
TYPE_WATCHONLY_FIDELITY_BONDS, TYPE_WATCHONLY_TIMELOCK_P2WSH, \
TYPE_WATCHONLY_P2WPKH, TYPE_P2TR, TYPE_P2TR_FROST, ENGINES, \
detect_script_type, EngineError
+from .storage import DKGRecoveryStorage
from .support import get_random_bytes
from . import mn_encode, mn_decode
import jmbitcoin as btc
-from jmbase import JM_WALLET_NAME_PREFIX, bintohex, hextobin
+from jmbase import JM_WALLET_NAME_PREFIX, bintohex, hextobin, jmprint, get_log
+from jmfrost.chilldkg_ref import chilldkg
+from .frost_clients import (chilldkg_hexlify, decrypt_ext_recovery,
+ deserialize_ext_recovery, serialize_ext_recovery)
+from jmfrost.chilldkg_ref.chilldkg import hostpubkey_gen
+
+
+jlog = get_log()
def _int_to_bytestr(i):
@@ -284,8 +292,8 @@ class UTXOManager(object):
def enable_utxo(self, txid, index):
self.disable_utxo(txid, index, disable=False)
- def select_utxos(self, mixdepth, amount, utxo_filter=(), select_fn=None,
- maxheight=None):
+ async def select_utxos(self, mixdepth, amount, utxo_filter=(),
+ select_fn=None, maxheight=None):
assert isinstance(mixdepth, numbers.Integral)
utxos = self._utxo[mixdepth]
# do not select anything in the filter
@@ -324,7 +332,7 @@ class UTXOManager(object):
) if v[2] <= maxheight}
return sum(x[1] for x in utxomap.values())
- def get_utxos_at_mixdepth(self, mixdepth: int) -> \
+ async def get_utxos_at_mixdepth(self, mixdepth: int) -> \
Dict[Tuple[bytes, int], Tuple[Tuple, int, int]]:
utxomap = self._utxo.get(mixdepth)
return deepcopy(utxomap) if utxomap else {}
@@ -371,6 +379,352 @@ class AddressLabelsManager(object):
del self._addr_labels[address]
+class DKGManager:
+
+ STORAGE_KEY = b'dkg'
+ SECSHARE_SUBKEY = b'secshare'
+ PUBSHARES_SUBKEY = b'pubshares'
+ PUBKEY_SUBKEY = b'pubkey'
+ HOSTPUBKEYS_SUBKEY = b'hostpubkeys'
+ T_SUBKEY = b't'
+ SESSIONS_SUBKEY = b'sessions'
+ RECOVERY_STORAGE_KEY = b'dkg'
+
+ def __init__(self, wallet, dkg_storage, recovery_storage):
+ self.wallet = wallet
+ self.dkg_storage = dkg_storage
+ self.recovery_storage = recovery_storage
+ self._dkg_secshare = None
+ self._dkg_pubshares = None
+ self._dkg_pubkey = None
+ self._dkg_hostpubkeys = None
+ self._dkg_t = None
+ self._dkg_sessions = None
+ self._load_storage()
+ assert self._dkg_secshare is not None
+ assert self._dkg_pubshares is not None
+ assert self._dkg_pubkey is not None
+ assert self._dkg_hostpubkeys is not None
+ assert self._dkg_t is not None
+ assert self._dkg_sessions is not None
+
+ @classmethod
+ def initialize(cls, dkg_storage, recovery_storage):
+ dkg_storage.data[cls.STORAGE_KEY] = {
+ cls.SECSHARE_SUBKEY: {},
+ cls.PUBSHARES_SUBKEY: {},
+ cls.PUBKEY_SUBKEY: {},
+ cls.HOSTPUBKEYS_SUBKEY: {},
+ cls.T_SUBKEY: {},
+ cls.SESSIONS_SUBKEY: {},
+ }
+ recovery_storage.data[cls.RECOVERY_STORAGE_KEY] = dict()
+
+ def _load_storage(self):
+ data = self.dkg_storage.data
+ assert isinstance(data[self.STORAGE_KEY], dict)
+
+ assert self.SECSHARE_SUBKEY in data[self.STORAGE_KEY]
+ assert isinstance(data[self.STORAGE_KEY][self.SECSHARE_SUBKEY], dict)
+
+ assert self.PUBSHARES_SUBKEY in data[self.STORAGE_KEY]
+ assert isinstance(data[self.STORAGE_KEY][self.PUBSHARES_SUBKEY], dict)
+
+ assert self.PUBKEY_SUBKEY in data[self.STORAGE_KEY]
+ assert isinstance(data[self.STORAGE_KEY][self.PUBKEY_SUBKEY], dict)
+
+ assert self.HOSTPUBKEYS_SUBKEY in data[self.STORAGE_KEY]
+ assert isinstance(data[self.STORAGE_KEY][self.HOSTPUBKEYS_SUBKEY],
+ dict)
+
+ assert self.T_SUBKEY in data[self.STORAGE_KEY]
+ assert isinstance(data[self.STORAGE_KEY][self.T_SUBKEY], dict)
+
+ assert self.SESSIONS_SUBKEY in data[self.STORAGE_KEY]
+ assert isinstance(data[self.STORAGE_KEY][self.SESSIONS_SUBKEY], dict)
+
+ self._dkg_secshare = collections.defaultdict(dict)
+ for c, secshare in (
+ data[self.STORAGE_KEY][self.SECSHARE_SUBKEY].items()):
+ self._dkg_secshare[c] = secshare
+
+ self._dkg_pubshares = collections.defaultdict(dict)
+ for c, pubshares in (
+ data[self.STORAGE_KEY][self.PUBSHARES_SUBKEY].items()):
+ self._dkg_pubshares[c] = pubshares
+
+ self._dkg_pubkey = collections.defaultdict(dict)
+ for c, pubkey in (
+ data[self.STORAGE_KEY][self.PUBKEY_SUBKEY].items()):
+ self._dkg_pubkey[c] = pubkey
+
+ self._dkg_hostpubkeys = collections.defaultdict(dict)
+ for c, hostpubkeys in (
+ data[self.STORAGE_KEY][self.HOSTPUBKEYS_SUBKEY].items()):
+ self._dkg_hostpubkeys[c] = hostpubkeys
+
+ self._dkg_t = collections.defaultdict(dict)
+ for c, t in (
+ data[self.STORAGE_KEY][self.T_SUBKEY].items()):
+ self._dkg_t[c] = t
+
+ self._dkg_sessions = collections.defaultdict(dict)
+ for ser_md_type_idx, session_id in (
+ data[self.STORAGE_KEY][self.SESSIONS_SUBKEY].items()):
+ md_type_idx = deserialize_ext_recovery(ser_md_type_idx)
+ self._dkg_sessions[md_type_idx] = session_id
+
+ rec_dkg_data = self.recovery_storage.data[self.RECOVERY_STORAGE_KEY]
+ assert isinstance(rec_dkg_data, dict)
+ for session_id, dkg_data_tuple in rec_dkg_data.items():
+ assert isinstance(dkg_data_tuple, list)
+ assert len(dkg_data_tuple) == 2
+
+ def save(self, write=True):
+ new_data = {
+ self.SECSHARE_SUBKEY: {},
+ self.PUBSHARES_SUBKEY: {},
+ self.PUBKEY_SUBKEY: {},
+ self.HOSTPUBKEYS_SUBKEY: {},
+ self.T_SUBKEY: {},
+ self.SESSIONS_SUBKEY: {},
+ }
+ self.dkg_storage.data[self.STORAGE_KEY] = new_data
+
+ for c, secshare in self._dkg_secshare.items():
+ new_data[self.SECSHARE_SUBKEY][c] = secshare
+
+ for c, pubshares in self._dkg_pubshares.items():
+ new_data[self.PUBSHARES_SUBKEY][c] = pubshares
+
+ for c, pubkey in self._dkg_pubkey.items():
+ new_data[self.PUBKEY_SUBKEY][c] = pubkey
+
+ for c, hostpubkeys in self._dkg_hostpubkeys.items():
+ new_data[self.HOSTPUBKEYS_SUBKEY][c] = hostpubkeys
+
+ for c, t in self._dkg_t.items():
+ new_data[self.T_SUBKEY][c] = t
+
+ for md_type_idx, session_id in self._dkg_sessions.items():
+ ser_md_type_idx = serialize_ext_recovery(*md_type_idx)
+ new_data[self.SESSIONS_SUBKEY][ser_md_type_idx] = session_id
+
+ if write:
+ jlog.debug('DKGManager saving dkg data')
+ self.dkg_storage.save()
+ jlog.debug('DKGManager saving dkg recovery data')
+ self.recovery_storage.save()
+
+ def find_session(self, mixdepth, address_type, index):
+ md_type_idx = (mixdepth, address_type, index)
+ return self._dkg_sessions.get(md_type_idx, None)
+
+ def find_dkg_pubkey(self, mixdepth, address_type, index):
+ session = self.find_session(mixdepth, address_type, index)
+ if session:
+ return self._dkg_pubkey.get(session)
+
+ def add_party_data(self, *, session_id, dkg_output, hostpubkeys, t,
+ recovery_data, ext_recovery):
+ assert isinstance(dkg_output, tuple)
+ assert isinstance(dkg_output.secshare, bytes)
+ assert len(dkg_output.secshare) == 32
+ assert isinstance(dkg_output.threshold_pubkey, bytes)
+ assert len(dkg_output.threshold_pubkey) == 33
+ for pubshare in dkg_output.pubshares:
+ assert isinstance(pubshare, bytes)
+ assert len(pubshare) == 33
+ for hostpubkey in hostpubkeys:
+ assert isinstance(hostpubkey, bytes)
+ assert len(hostpubkey) == 33
+ assert isinstance(t, int)
+ assert isinstance(recovery_data, bytes)
+ self._dkg_secshare[session_id] = dkg_output.secshare
+ self._dkg_pubshares[session_id] = dkg_output.pubshares
+ self._dkg_pubkey[session_id] = dkg_output.threshold_pubkey
+ self._dkg_hostpubkeys[session_id] = hostpubkeys
+ self._dkg_t[session_id] = t
+
+ recovery_dkg = self.recovery_storage.data[self.RECOVERY_STORAGE_KEY]
+ recovery_dkg[session_id] = (ext_recovery, recovery_data)
+
+ self.save()
+
+ def add_coordinator_data(self, *, session_id, dkg_output, hostpubkeys, t,
+ recovery_data, ext_recovery):
+ assert isinstance(dkg_output, tuple)
+ assert isinstance(dkg_output.secshare, bytes)
+ assert len(dkg_output.secshare) == 32
+ assert isinstance(dkg_output.threshold_pubkey, bytes)
+ assert len(dkg_output.threshold_pubkey) == 33
+ for pubshare in dkg_output.pubshares:
+ assert isinstance(pubshare, bytes)
+ assert len(pubshare) == 33
+ for hostpubkey in hostpubkeys:
+ assert isinstance(hostpubkey, bytes)
+ assert len(hostpubkey) == 33
+ assert isinstance(t, int)
+ assert isinstance(recovery_data, bytes)
+ self._dkg_secshare[session_id] = dkg_output.secshare
+ self._dkg_pubshares[session_id] = dkg_output.pubshares
+ self._dkg_pubkey[session_id] = dkg_output.threshold_pubkey
+ self._dkg_hostpubkeys[session_id] = hostpubkeys
+ self._dkg_t[session_id] = t
+
+ privkey = self.wallet._hostseckey
+ ext_recovery_bytes = decrypt_ext_recovery(privkey, ext_recovery)
+ md_type_idx = deserialize_ext_recovery(ext_recovery_bytes)
+ if md_type_idx in self._dkg_sessions:
+ raise Exception(f'add_coordinator_data: {md_type_idx} '
+ f'already in _dkg_sessions')
+ jlog.debug(f'add_coordinator_data: adding data for {md_type_idx}, '
+ f'session_id={session_id.hex()}')
+ self._dkg_sessions[md_type_idx] = session_id
+
+ recovery_dkg = self.recovery_storage.data[self.RECOVERY_STORAGE_KEY]
+ recovery_dkg[session_id] = (ext_recovery, recovery_data)
+
+ self.save()
+
+ async def dkg_recover(self, dkgrec_path):
+ rec_storage = DKGRecoveryStorage(
+ dkgrec_path, create=False, read_only=True)
+ rec_dkg = rec_storage.data[self.RECOVERY_STORAGE_KEY]
+ privkey = self.wallet._hostseckey
+ wallet_rec_dkg = self.recovery_storage.data[self.RECOVERY_STORAGE_KEY]
+ jlog.info(f'Found {len(rec_dkg)} records in the {dkgrec_path}')
+ for session_id, (ext_recovery, recovery_data) in rec_dkg.items():
+ jlog.info(f'Processing session_id {session_id.hex()}')
+ try:
+ ext_recovery_bytes = decrypt_ext_recovery(
+ privkey, ext_recovery)
+ md_type_idx = deserialize_ext_recovery(ext_recovery_bytes)
+ except btc.ECIESDecryptionError:
+ md_type_idx = None
+ dkg_output, session_params = chilldkg.recover(
+ privkey[:32], recovery_data)
+ if md_type_idx:
+ if md_type_idx not in self._dkg_sessions:
+ self._dkg_sessions[md_type_idx] = session_id
+ else:
+ old_sess_id = self._dkg_sessions[md_type_idx]
+ jlog.debug(f'dkg_recover: {md_type_idx} already in the'
+ f' DKG sessions with session_id'
+ f' {old_sess_id.hex()}, new session_id'
+ f' {session_id.hex()} ignored')
+ self._dkg_secshare[session_id] = dkg_output.secshare
+ self._dkg_pubshares[session_id] = dkg_output.pubshares
+ self._dkg_pubkey[session_id] = dkg_output.threshold_pubkey
+ self._dkg_hostpubkeys[session_id] = session_params.hostpubkeys
+ self._dkg_t[session_id] = session_params.t
+
+ wallet_rec_dkg[session_id] = (ext_recovery, recovery_data)
+
+ self.save()
+
+ def dkg_ls(self):
+ res = collections.defaultdict(dict)
+ for md_type_idx, session_id in self._dkg_sessions.items():
+ str_md_type_idx = (f'{md_type_idx[0]},{md_type_idx[1]}'
+ f',{md_type_idx[2]}')
+ res[self.SESSIONS_SUBKEY.decode()][str_md_type_idx] = \
+ session_id.hex()
+ for c in self._dkg_secshare.keys():
+ secshare_hex = f'{self._dkg_secshare[c].hex()}'
+ res[c.hex()][self.SECSHARE_SUBKEY.decode()] = secshare_hex
+ if c in self._dkg_pubkey:
+ pubkey_hex = f'{self._dkg_pubkey[c].hex()}'
+ res[c.hex()][self.PUBKEY_SUBKEY.decode()] = pubkey_hex
+ if c in self._dkg_hostpubkeys:
+ hostpubkeys_hex = [f'{hpk.hex()}'
+ for hpk in self._dkg_hostpubkeys[c]]
+ res[c.hex()][self.HOSTPUBKEYS_SUBKEY.decode()] = \
+ hostpubkeys_hex
+ if c in self._dkg_t:
+ res[c.hex()][self.T_SUBKEY.decode()] = self._dkg_t[c]
+ if c in self._dkg_pubshares:
+ pubshares_hex = [f'{ps.hex()}'
+ for ps in self._dkg_pubshares[c]]
+ res[c.hex()][self.PUBSHARES_SUBKEY.decode()] = pubshares_hex
+ return f'DKG data:\n{json.dumps(res, indent=4)}'
+
+ def dkg_rm(self, session_ids: list):
+ try:
+ res = ''
+ for sess_id in session_ids:
+ c = hextobin(sess_id)
+ if c in self._dkg_secshare:
+ self._dkg_secshare.pop(c, None)
+ self._dkg_pubshares.pop(c, None)
+ self._dkg_pubkey.pop(c, None)
+ self._dkg_hostpubkeys.pop(c, None)
+ self._dkg_t.pop(c, None)
+ self.save()
+ for md_type_idx in list(self._dkg_sessions.keys()):
+ if self._dkg_sessions[md_type_idx] == c:
+ self._dkg_sessions.pop(md_type_idx)
+ # all per-session DKG data for this id was removed above
+ res += f'dkg data for session {sess_id} deleted\n'
+ else:
+ res += f'no dkg data found for session {sess_id}\n'
+ self.save()
+ return res
+ except Exception as e:
+ jmprint(f'error: {repr(e)}', 'error')
+
+ def recdkg_ls(self):
+ dec_res = []
+ enc_res = []
+ recovery_dkg = self.recovery_storage.data[self.RECOVERY_STORAGE_KEY]
+ privkey = self.wallet._hostseckey
+ for session_id, (ext_recovery, recovery_data) in recovery_dkg.items():
+ decoded_ext_rec = False
+ try:
+ dec_ext_rec = btc.ecies_decrypt(privkey, ext_recovery).hex()
+ decoded_ext_rec = True
+ except btc.ECIESDecryptionError:
+ dec_ext_rec = ext_recovery.hex()
+ try:
+ dkg_output, session_params = chilldkg.recover(
+ privkey[:32], recovery_data)
+ dkg_output = chilldkg_hexlify(dkg_output)
+ session_params = chilldkg_hexlify(session_params)
+ recovery_data = {
+ 'dkg_output': dkg_output,
+ 'session_params': session_params,
+ }
+ except Exception as e:
+ jlog.warn(f'recdkg_ls: Cannot recover data: {repr(e)}')
+ recovery_data = recovery_data.hex()
+ if decoded_ext_rec:
+ dec_res.append([session_id.hex(), dec_ext_rec, recovery_data])
+ else:
+ enc_res.append([session_id.hex(), dec_ext_rec, recovery_data])
+ return (f'DKG recovery data (session_id, ext_recovery, recovery_data):'
+ f'\nDecrypted sessions:\n{json.dumps(dec_res, indent=4)}\n'
+ f'\nNot decrypted sessions:\n{json.dumps(enc_res, indent=4)}')
+
+ def recdkg_rm(self, session_ids: list):
+ res = ''
+ rm_sess_ids = []
+ not_found_ids = []
+ recovery_dkg = self.recovery_storage.data[self.RECOVERY_STORAGE_KEY]
+ for sess_id in session_ids:  # hex strings, as in dkg_rm
+ if hextobin(sess_id) in recovery_dkg:
+ rm_sess_ids.append(hextobin(sess_id))
+ else:
+ not_found_ids.append(sess_id)
+ for session_id in rm_sess_ids:
+ del recovery_dkg[session_id]
+ res += (f'dkg recovery data for session {session_id.hex()}'
+ f' deleted\n')
+ for sess_id in not_found_ids:
+ res += f'no dkg recovery data found for session {sess_id}\n'
+ self.save()
+ return res
+
+
class BaseWallet(object):
TYPE = None
@@ -411,7 +765,9 @@ class BaseWallet(object):
# {address: path}, should always hold mappings for all "known" keys
self._addr_map = {}
- self._load_storage(load_cache=load_cache)
+ async def async_init(self, storage, gap_limit=6, merge_algorithm_name=None,
+ mixdepth=None, load_cache=True):
+ await self._load_storage(load_cache=load_cache)
assert self._utxos is not None
assert self._cache is not None
@@ -441,7 +797,7 @@ class BaseWallet(object):
def gaplimit(self):
return self.gap_limit
- def _load_storage(self, load_cache: bool = True) -> None:
+ async def _load_storage(self, load_cache: bool = True) -> None:
"""
load data from storage
"""
@@ -514,7 +870,7 @@ class BaseWallet(object):
return 'p2pkh'
elif self.TYPE == TYPE_P2SH_P2WPKH:
return 'p2sh-p2wpkh'
- elif self.TYPE == TYPE_P2TR:
+ elif self.TYPE in (TYPE_P2TR, TYPE_P2TR_FROST):
return 'p2tr'
elif self.TYPE in (TYPE_P2WPKH,
TYPE_SEGWIT_WALLET_FIDELITY_BONDS):
@@ -566,7 +922,7 @@ class BaseWallet(object):
spent_outputs.append(prevtx.vout[n])
return spent_outputs
- def sign_tx(self, tx, scripts, **kwargs):
+ async def sign_tx(self, tx, scripts, **kwargs):
"""
Add signatures to transaction for inputs referenced by scripts.
@@ -581,14 +937,18 @@ class BaseWallet(object):
for index, (script, amount) in scripts.items():
assert amount > 0
path = self.script_to_path(script)
- privkey, engine = self._get_key_from_path(path)
spent_outputs = None # need for SignatureHashSchnorr
- if isinstance(self, TaprootWallet):
+ if isinstance(self, (TaprootWallet, FrostWallet)):
spent_outputs = self.get_spent_outputs(tx)
if spent_outputs:
kwargs['spent_outputs'] = spent_outputs
- sig, msg = engine.sign_transaction(tx, index, privkey,
- amount, **kwargs)
+ if isinstance(self, FrostWallet):
+ sig, msg = await self._ENGINE.sign_transaction(
+ tx, index, path, amount, wallet=self, **kwargs)
+ else:
+ privkey, engine = self._get_key_from_path(path)
+ sig, msg = await engine.sign_transaction(
+ tx, index, privkey, amount, **kwargs)
if not sig:
return False, msg
return True, None
@@ -602,49 +962,53 @@ class BaseWallet(object):
privkey = self._get_key_from_path(path)[0]
return privkey
- def get_external_addr(self, mixdepth):
+ async def get_external_addr(self, mixdepth):
"""
Return an address suitable for external distribution, including funding
the wallet from other sources, or receiving payments or donations.
JoinMarket will never generate these addresses for internal use.
"""
- if isinstance(self, TaprootWallet):
- pubkey = self.get_new_pubkey(mixdepth, self.ADDRESS_TYPE_EXTERNAL)
+ if isinstance(self, (TaprootWallet, FrostWallet)):
+ pubkey = await self.get_new_pubkey(
+ mixdepth, self.ADDRESS_TYPE_EXTERNAL)
return self.pubkey_to_addr(pubkey)
else:
- return self.get_new_addr(mixdepth, self.ADDRESS_TYPE_EXTERNAL)
+ return await self.get_new_addr(
+ mixdepth, self.ADDRESS_TYPE_EXTERNAL)
- def get_internal_addr(self, mixdepth):
+ async def get_internal_addr(self, mixdepth):
"""
Return an address for internal usage, as change addresses and when
participating in transactions initiated by other parties.
"""
- if isinstance(self, TaprootWallet):
- pubkey = self.get_new_pubkey(mixdepth, self.ADDRESS_TYPE_INTERNAL)
+ if isinstance(self, (TaprootWallet, FrostWallet)):
+ pubkey = await self.get_new_pubkey(
+ mixdepth, self.ADDRESS_TYPE_INTERNAL)
return self.pubkey_to_addr(pubkey)
else:
- return self.get_new_addr(mixdepth, self.ADDRESS_TYPE_INTERNAL)
+ return await self.get_new_addr(
+ mixdepth, self.ADDRESS_TYPE_INTERNAL)
- def get_external_pubkey(self, mixdepth):
+ async def get_external_pubkey(self, mixdepth):
"""
Return an pubkey suitable for external distribution, including funding
the wallet from other sources, or receiving payments or donations.
JoinMarket will never generate these addresses for internal use.
"""
- return self.get_new_pubkey(mixdepth, self.ADDRESS_TYPE_EXTERNAL)
+ return await self.get_new_pubkey(mixdepth, self.ADDRESS_TYPE_EXTERNAL)
- def get_internal_pubkey(self, mixdepth):
+ async def get_internal_pubkey(self, mixdepth):
"""
Return an pubkey for internal usage, as change addresses and when
participating in transactions initiated by other parties.
"""
- return self.get_new_pubkey(mixdepth, self.ADDRESS_TYPE_INTERNAL)
+ return await self.get_new_pubkey(mixdepth, self.ADDRESS_TYPE_INTERNAL)
- def get_external_script(self, mixdepth):
- return self.get_new_script(mixdepth, self.ADDRESS_TYPE_EXTERNAL)
+ async def get_external_script(self, mixdepth):
+ return await self.get_new_script(mixdepth, self.ADDRESS_TYPE_EXTERNAL)
- def get_internal_script(self, mixdepth):
- return self.get_new_script(mixdepth, self.ADDRESS_TYPE_INTERNAL)
+ async def get_internal_script(self, mixdepth):
+ return await self.get_new_script(mixdepth, self.ADDRESS_TYPE_INTERNAL)
@classmethod
def addr_to_script(cls, addr):
@@ -662,6 +1026,14 @@ class BaseWallet(object):
"""
return cls._ENGINE.pubkey_to_script(pubkey)
+ @classmethod
+ def output_pubkey_to_script(cls, pubkey):
+ """
+ Try not to call this slow method. Instead, call
+ get_script_from_path if possible, as that is cached.
+ """
+ return cls._ENGINE.output_pubkey_to_script(pubkey)
+
@classmethod
def pubkey_to_addr(cls, pubkey):
"""
@@ -670,13 +1042,13 @@ class BaseWallet(object):
"""
return cls._ENGINE.pubkey_to_address(pubkey)
- def script_to_addr(self, script,
+ async def script_to_addr(self, script,
validate_cache: bool = False):
path = self.script_to_path(script)
- return self.get_address_from_path(path,
+ return await self.get_address_from_path(path,
validate_cache=validate_cache)
- def get_script_code(self, script):
+ async def get_script_code(self, script):
"""
For segwit wallets, gets the value of the scriptCode
parameter required (see BIP143) for sighashing; this is
@@ -685,7 +1057,7 @@ class BaseWallet(object):
For non-segwit wallets, raises EngineError.
"""
path = self.script_to_path(script)
- pub, engine = self._get_pubkey_from_path(path)
+ pub, engine = await self._get_pubkey_from_path(path)
return engine.pubkey_to_script_code(pub)
@classmethod
@@ -696,31 +1068,36 @@ class BaseWallet(object):
def pubkey_has_script(cls, pubkey, script):
return cls._ENGINE.pubkey_has_script(pubkey, script)
+ @classmethod
+ def output_pubkey_has_script(cls, pubkey, script):
+ return cls._ENGINE.output_pubkey_has_script(pubkey, script)
+
@deprecated
def get_key(self, mixdepth, address_type, index):
raise NotImplementedError()
- def get_addr(self, mixdepth, address_type, index,
+ async def get_addr(self, mixdepth, address_type, index,
validate_cache: bool = False):
path = self.get_path(mixdepth, address_type, index)
- return self.get_address_from_path(path,
+ return await self.get_address_from_path(path,
validate_cache=validate_cache)
- def get_pubkey(self, mixdepth, address_type, index,
+ async def get_pubkey(self, mixdepth, address_type, index,
validate_cache: bool = False):
path = self.get_path(mixdepth, address_type, index)
- return self._get_pubkey_from_path(
- path, validate_cache=validate_cache)[0]
+ pubkey, _ = await self._get_pubkey_from_path(
+ path, validate_cache=validate_cache)
+ return pubkey
- def get_address_from_path(self, path,
+ async def get_address_from_path(self, path,
validate_cache: bool = False):
cache = self._get_cache_for_path(path)
addr = cache.get(b'A')
if addr is not None:
addr = addr.decode('ascii')
if addr is None or validate_cache:
- engine = self._get_pubkey_from_path(path)[1]
- script = self.get_script_from_path(path,
+ _, engine = await self._get_pubkey_from_path(path)
+ script = await self.get_script_from_path(path,
validate_cache=validate_cache)
new_addr = engine.script_to_address(script)
if addr is None:
@@ -730,25 +1107,26 @@ class BaseWallet(object):
raise WalletCacheValidationFailed()
return addr
- def get_new_addr(self, mixdepth, address_type,
+ async def get_new_addr(self, mixdepth, address_type,
validate_cache: bool = True):
"""
use get_external_addr/get_internal_addr
"""
- script = self.get_new_script(mixdepth, address_type,
+ script = await self.get_new_script(mixdepth, address_type,
validate_cache=validate_cache)
- return self.script_to_addr(script,
+ return await self.script_to_addr(script,
validate_cache=validate_cache)
- def get_new_pubkey(self, mixdepth, address_type,
+ async def get_new_pubkey(self, mixdepth, address_type,
validate_cache: bool = True):
- script = self.get_new_script(
+ script = await self.get_new_script(
mixdepth, address_type, validate_cache=validate_cache)
path = self.script_to_path(script)
- return self._get_pubkey_from_path(
- path, validate_cache=validate_cache)[0]
+ pubkey, _ = await self._get_pubkey_from_path(
+ path, validate_cache=validate_cache)
+ return pubkey
- def get_new_script(self, mixdepth, address_type,
+ async def get_new_script(self, mixdepth, address_type,
validate_cache: bool = True):
raise NotImplementedError()
@@ -782,7 +1160,7 @@ class BaseWallet(object):
"""
self.save()
- def remove_old_utxos(self, tx):
+ async def remove_old_utxos(self, tx):
"""
Remove all own inputs of tx from internal utxo list.
@@ -799,7 +1177,7 @@ class BaseWallet(object):
if md is False:
continue
path, value, height = self._utxos.remove_utxo(txid, index, md)
- script = self.get_script_from_path(path)
+ script = await self.get_script_from_path(path)
removed_utxos[(txid, index)] = {'script': script,
'path': path,
'value': value}
@@ -865,7 +1243,7 @@ class BaseWallet(object):
retval.append((i, txin))
return retval
- def process_new_tx(self, txd, height=None):
+ async def process_new_tx(self, txd, height=None):
""" Given a newly seen transaction, deserialized as
CMutableTransaction txd,
process its inputs and outputs and update
@@ -875,11 +1253,11 @@ class BaseWallet(object):
obviously) utxos that were not related since the underlying
functions check this condition.
"""
- removed_utxos = self.remove_old_utxos(txd)
+ removed_utxos = await self.remove_old_utxos(txd)
added_utxos = self.add_new_utxos(txd, height=height)
return (removed_utxos, added_utxos)
- def select_utxos(self, mixdepth, amount, utxo_filter=None,
+ async def select_utxos(self, mixdepth, amount, utxo_filter=None,
select_fn=None, maxheight=None, includeaddr=False,
require_auth_address=False):
"""
@@ -914,23 +1292,24 @@ class BaseWallet(object):
assert len(i) == 2
assert isinstance(i[0], bytes)
assert isinstance(i[1], numbers.Integral)
- utxos = self._utxos.select_utxos(
+ utxos = await self._utxos.select_utxos(
mixdepth, amount, utxo_filter, select_fn, maxheight=maxheight)
total_value = 0
standard_utxo = None
for key, data in utxos.items():
- if self.is_standard_wallet_script(data['path']):
+ if await self.is_standard_wallet_script(data['path']):
standard_utxo = key
total_value += data['value']
- data['script'] = self.get_script_from_path(data['path'])
+ data['script'] = await self.get_script_from_path(data['path'])
if includeaddr:
- data["address"] = self.get_address_from_path(data["path"])
+ data["address"] = await self.get_address_from_path(
+ data["path"])
if require_auth_address and not standard_utxo:
# try to select more utxos, hoping for a standard one
try:
- return self.select_utxos(
+ return await self.select_utxos(
mixdepth, total_value + 1, utxo_filter, select_fn,
maxheight, includeaddr, require_auth_address)
except NotEnoughFundsException:
@@ -978,7 +1357,7 @@ class BaseWallet(object):
return self._utxos.get_balance_at_mixdepth(mixdepth,
include_disabled=include_disabled, maxheight=maxheight)
- def get_utxos_by_mixdepth(self, include_disabled: bool = False,
+ async def get_utxos_by_mixdepth(self, include_disabled: bool = False,
includeheight: bool = False,
limit_mixdepth: Optional[int] = None
) -> collections.defaultdict:
@@ -992,28 +1371,28 @@ class BaseWallet(object):
"""
script_utxos = collections.defaultdict(dict)
if limit_mixdepth:
- script_utxos[limit_mixdepth] = self.get_utxos_at_mixdepth(
+ script_utxos[limit_mixdepth] = await self.get_utxos_at_mixdepth(
mixdepth=limit_mixdepth, include_disabled=include_disabled,
includeheight=includeheight)
else:
for md in range(self.mixdepth + 1):
- script_utxos[md] = self.get_utxos_at_mixdepth(md,
+ script_utxos[md] = await self.get_utxos_at_mixdepth(md,
include_disabled=include_disabled,
includeheight=includeheight)
return script_utxos
- def get_utxos_at_mixdepth(self, mixdepth: int,
+ async def get_utxos_at_mixdepth(self, mixdepth: int,
include_disabled: bool = False,
includeheight: bool = False) -> \
Dict[Tuple[bytes, int], Dict[str, Any]]:
script_utxos = {}
if 0 <= mixdepth <= self.mixdepth:
- data = self._utxos.get_utxos_at_mixdepth(mixdepth)
+ data = await self._utxos.get_utxos_at_mixdepth(mixdepth)
for utxo, (path, value, height) in data.items():
if not include_disabled and self._utxos.is_disabled(*utxo):
continue
- script = self.get_script_from_path(path)
- addr = self.get_address_from_path(path)
+ script = await self.get_script_from_path(path)
+ addr = await self.get_address_from_path(path)
label = self.get_address_label(addr)
script_utxo = {
'script': script,
@@ -1028,11 +1407,11 @@ class BaseWallet(object):
return script_utxos
- def get_all_utxos(self, include_disabled=False):
+ async def get_all_utxos(self, include_disabled=False):
""" Get all utxos in the wallet, format of return
is as for get_utxos_by_mixdepth for each mixdepth.
"""
- mix_utxos = self.get_utxos_by_mixdepth(
+ mix_utxos = await self.get_utxos_by_mixdepth(
include_disabled=include_disabled)
all_utxos = {}
for d in mix_utxos.values():
@@ -1057,7 +1436,7 @@ class BaseWallet(object):
def _get_mixdepth_from_path(self, path):
raise NotImplementedError()
- def get_script_from_path(self, path,
+ async def get_script_from_path(self, path,
validate_cache: bool = False):
"""
internal note: This is the final sink for all operations that somehow
@@ -1072,7 +1451,7 @@ class BaseWallet(object):
cache = self._get_cache_for_path(path)
script = cache.get(b'S')
if script is None or validate_cache:
- pubkey, engine = self._get_pubkey_from_path(path,
+ pubkey, engine = await self._get_pubkey_from_path(path,
validate_cache=validate_cache)
new_script = engine.pubkey_to_script(pubkey)
if script is None:
@@ -1081,16 +1460,17 @@ class BaseWallet(object):
raise WalletCacheValidationFailed()
return script
- def get_script(self, mixdepth, address_type, index,
+ async def get_script(self, mixdepth, address_type, index,
validate_cache: bool = False):
path = self.get_path(mixdepth, address_type, index)
- return self.get_script_from_path(path, validate_cache=validate_cache)
+ return await self.get_script_from_path(
+ path, validate_cache=validate_cache)
def _get_key_from_path(self, path,
validate_cache: bool = False):
raise NotImplementedError()
- def _get_keypair_from_path(self, path,
+ async def _get_keypair_from_path(self, path,
validate_cache: bool = False):
privkey, engine = self._get_key_from_path(path,
validate_cache=validate_cache)
@@ -1104,9 +1484,9 @@ class BaseWallet(object):
raise WalletCacheValidationFailed()
return privkey, pubkey, engine
- def _get_pubkey_from_path(self, path,
+ async def _get_pubkey_from_path(self, path,
validate_cache: bool = False):
- privkey, pubkey, engine = self._get_keypair_from_path(path,
+ privkey, pubkey, engine = await self._get_keypair_from_path(path,
validate_cache=validate_cache)
return pubkey, engine
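Several call sites above replace `self._get_pubkey_from_path(path)[0]` with tuple unpacking. A small standalone sketch (names are illustrative, not from the codebase) of why unpacking is the safer idiom once the method is a coroutine:

```python
import asyncio

async def get_pair():
    # stands in for an async _get_pubkey_from_path returning (pubkey, engine)
    return ("pubkey_bytes", "engine")

async def main():
    # (await get_pair())[0] also works, but needs the extra parentheses;
    # writing get_pair()[0] by mistake would subscript the coroutine
    # object itself and raise TypeError. Unpacking avoids both pitfalls.
    pubkey, _ = await get_pair()
    return pubkey

print(asyncio.run(main()))  # pubkey_bytes
```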
@@ -1177,7 +1557,7 @@ class BaseWallet(object):
"""
raise NotImplementedError()
- def sign_message(self, message, path):
+ async def sign_message(self, message, path):
"""
Sign the message using the key referenced by path.
@@ -1189,7 +1569,7 @@ class BaseWallet(object):
signature as base64-encoded string
"""
priv, engine = self._get_key_from_path(path)
- addr = self.get_address_from_path(path)
+ addr = await self.get_address_from_path(path)
return addr, engine.sign_message(priv, message)
def get_wallet_name(self):
@@ -1224,7 +1604,7 @@ class BaseWallet(object):
"""
return iter([])
- def is_standard_wallet_script(self, path):
+ async def is_standard_wallet_script(self, path):
"""
Check if the path's script is of the same type as the standard wallet
key type.
@@ -1277,10 +1657,10 @@ class BaseWallet(object):
for path in self.yield_imported_paths(md):
yield path
- def _populate_maps(self, paths):
+ async def _populate_maps(self, paths):
for path in paths:
- self._script_map[self.get_script_from_path(path)] = path
- self._addr_map[self.get_address_from_path(path)] = path
+ self._script_map[await self.get_script_from_path(path)] = path
+ self._addr_map[await self.get_address_from_path(path)] = path
def addr_to_path(self, addr):
assert isinstance(addr, str)
@@ -1381,6 +1761,9 @@ class PSBTWalletMixin(object):
def __init__(self, storage, **kwargs):
super().__init__(storage, **kwargs)
+ async def async_init(self, storage, **kwargs):
+ await super().async_init(storage, **kwargs)
+
def is_input_finalized(self, psbt_input):
""" This should be a convenience method in python-bitcointx.
However note: this is not a static method and tacitly
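The recurring `async_init` additions implement two-phase construction: `__init__` cannot contain `await`, so the synchronous constructor only stores state and the awaitable second phase does any work that needs the event loop. A minimal sketch of the pattern (class and attribute names are illustrative):

```python
import asyncio

class AsyncConstructed:
    def __init__(self, value):
        # phase 1 (sync): just record arguments, no I/O
        self.value = value
        self.loaded = None

    async def async_init(self):
        # phase 2 (async): anything that must await, e.g. storage or IPC
        await asyncio.sleep(0)  # stand-in for real awaited work
        self.loaded = self.value * 2
        return self  # returning self allows one-line construction

async def main():
    obj = await AsyncConstructed(21).async_init()
    return obj.loaded

print(asyncio.run(main()))  # 42
```

In the diff, each mixin's `async_init` chains `await super().async_init(...)`, mirroring how the synchronous `__init__` chain already cooperates via `super()`.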
@@ -1547,7 +1930,8 @@ class PSBTWalletMixin(object):
else:
return False
- def create_psbt_from_tx(self, tx, spent_outs=None, force_witness_utxo=True):
+ async def create_psbt_from_tx(self, tx, spent_outs=None,
+ force_witness_utxo=True):
""" Given a CMutableTransaction object, which should not currently
contain signatures, we create and return a new PSBT object of type
btc.PartiallySignedTransaction.
@@ -1594,11 +1978,11 @@ class PSBTWalletMixin(object):
# this happens when an input is provided but it's not in
# this wallet; in this case, we cannot set the redeem script.
continue
- pubkey = self._get_pubkey_from_path(path)[0]
+ pubkey, _ = await self._get_pubkey_from_path(path)
txinput.redeem_script = btc.pubkey_to_p2wpkh_script(pubkey)
return new_psbt
- def sign_psbt(self, in_psbt, with_sign_result=False):
+ async def sign_psbt(self, in_psbt, with_sign_result=False):
""" Given a serialized PSBT in raw binary format,
iterate over the inputs and sign all that we can sign with this wallet.
NB IT IS UP TO CALLERS TO ENSURE THAT THEY ACTUALLY WANT TO SIGN
@@ -1665,7 +2049,7 @@ class PSBTWalletMixin(object):
# this happens when an input is provided but it's not in
# this wallet; in this case, we cannot set the redeem script.
continue
- pubkey = self._get_pubkey_from_path(path)[0]
+ pubkey, _ = await self._get_pubkey_from_path(path)
txinput.redeem_script = btc.pubkey_to_p2wpkh_script(pubkey)
# no else branch; any other form of scriptPubKey will just be
# ignored.
@@ -1685,8 +2069,11 @@ class SNICKERWalletMixin(object):
def __init__(self, storage, **kwargs):
super().__init__(storage, **kwargs)
- def check_tweak_matches_and_import(self, addr, tweak, tweaked_key,
- source_mixdepth):
+ async def async_init(self, storage, **kwargs):
+ await super().async_init(storage, **kwargs)
+
+ async def check_tweak_matches_and_import(self, addr, tweak, tweaked_key,
+ source_mixdepth):
""" Given the address from our HD wallet, the tweak bytes and
the tweaked public key, check the tweak correctly generates the
claimed tweaked public key. If not, (False, errmsg is returned),
@@ -1707,16 +2094,18 @@ class SNICKERWalletMixin(object):
# note that the WIF is not preserving SPK type, it's implied
# for the wallet.
try:
- self.import_private_key((source_mixdepth + 1) % (self.mixdepth + 1),
- self._ENGINE.privkey_to_wif(tweaked_privkey))
+ await self.import_private_key(
+ (source_mixdepth + 1) % (self.mixdepth + 1),
+ self._ENGINE.privkey_to_wif(tweaked_privkey))
self.save()
except:
return False, "Failed to import private key."
return True, None
- def create_snicker_proposal(self, our_inputs, their_input, our_input_utxos,
- their_input_utxo, net_transfer, network_fee,
- our_priv, their_pub, our_spk, change_spk,
+ async def create_snicker_proposal(self, our_inputs, their_input,
+ our_input_utxos, their_input_utxo,
+ net_transfer, network_fee, our_priv,
+ their_pub, our_spk, change_spk,
encrypted=True, version_byte=1):
""" Creates a SNICKER proposal from the given transaction data.
This only applies to existing specification, i.e. SNICKER v 00 or 01.
@@ -1801,10 +2190,10 @@ class SNICKERWalletMixin(object):
tx = btc.mktx(all_inputs, outputs,
version=2, locktime=0)
# create the psbt and then sign our input.
- snicker_psbt = self.create_psbt_from_tx(tx,
- spent_outs=all_input_utxos)
+ snicker_psbt = await self.create_psbt_from_tx(
+ tx, spent_outs=all_input_utxos)
# having created the PSBT, sign our input
- signed_psbt_and_signresult, err = self.sign_psbt(
+ signed_psbt_and_signresult, err = await self.sign_psbt(
snicker_psbt.serialize(), with_sign_result=True)
assert err is None
signresult, partially_signed_psbt = signed_psbt_and_signresult
@@ -1822,8 +2211,8 @@ class SNICKERWalletMixin(object):
# we apply ECIES in the form given by the BIP.
return btc.ecies_encrypt(snicker_serialized_message, their_pub)
- def parse_proposal_to_signed_tx(self, addr, proposal,
- acceptance_callback):
+ async def parse_proposal_to_signed_tx(self, addr, proposal,
+ acceptance_callback):
""" Given a candidate privkey (binary and compressed format),
and a candidate encrypted SNICKER proposal, attempt to decrypt
and validate it in all aspects. If validation fails the first
@@ -1942,15 +2331,28 @@ class SNICKERWalletMixin(object):
assert unsigned_index != -1
# All validation checks passed. We now check whether the
#transaction is acceptable according to the caller:
- if not acceptance_callback([utx.vin[unsigned_index]],
- [x for i, x in enumerate(utx.vin) if i != unsigned_index],
- [utx.vout[our_output_index]],
- [x for i, x in enumerate(utx.vout) if i != our_output_index]):
- return None, "Caller rejected transaction for signing."
+            unsigned_ins = [utx.vin[unsigned_index]]
+            other_ins = [x for i, x in enumerate(utx.vin)
+                         if i != unsigned_index]
+            our_outs = [utx.vout[our_output_index]]
+            other_outs = [x for i, x in enumerate(utx.vout)
+                          if i != our_output_index]
+            # Call the callback once; a sync callback returns a bool
+            # directly, while an async callback returns a coroutine that
+            # must be awaited. Note asyncio.iscoroutine() is only true
+            # for a created coroutine object, never for the bare callable.
+            accepted = acceptance_callback(unsigned_ins, other_ins,
+                                           our_outs, other_outs)
+            if asyncio.iscoroutine(accepted):
+                accepted = await accepted
+            if not accepted:
+                return None, "Caller rejected transaction for signing."
# Acceptance passed, prepare the deserialized tx for signing by us:
- signresult_and_signedpsbt, err = self.sign_psbt(cpsbt.serialize(),
- with_sign_result=True)
+ signresult_and_signedpsbt, err = await self.sign_psbt(
+ cpsbt.serialize(), with_sign_result=True)
if err:
return None, "Unable to sign proposed PSBT, reason: " + err
signresult, signed_psbt = signresult_and_signedpsbt
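The hunk above must accept both synchronous and asynchronous acceptance callbacks. One way to handle that through a single call site, sketched standalone (helper and callback names are illustrative): call the callback once, then await the result only if it turned out to be a coroutine.

```python
import asyncio

async def run_callback(cb, *args):
    """Invoke cb; await the result if cb was an async function.

    asyncio.iscoroutine() is true only for a coroutine *object*
    (the result of calling an async function), never for the bare
    callable, so the check must come after the call.
    """
    result = cb(*args)
    if asyncio.iscoroutine(result):
        result = await result
    return result

def sync_cb(x):
    return x > 0

async def async_cb(x):
    return x > 0

async def main():
    return (await run_callback(sync_cb, 1),
            await run_callback(async_cb, -1))

print(asyncio.run(main()))  # (True, False)
```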
@@ -1974,13 +2376,16 @@ class ImportWalletMixin(object):
# path is (_IMPORTED_ROOT_PATH, mixdepth, key_index)
super().__init__(storage, **kwargs)
- def _load_storage(self, load_cache: bool = True) -> None:
- super()._load_storage(load_cache=load_cache)
+ async def async_init(self, storage, **kwargs):
+ await super().async_init(storage, **kwargs)
+
+ async def _load_storage(self, load_cache: bool = True) -> None:
+ await super()._load_storage(load_cache=load_cache)
self._imported = collections.defaultdict(list)
for md, keys in self._storage.data[self._IMPORTED_STORAGE_KEY].items():
md = int(md)
self._imported[md] = keys
- self._populate_maps(self.yield_imported_paths(md))
+ await self._populate_maps(self.yield_imported_paths(md))
def save(self):
import_data = {}
@@ -1999,7 +2404,7 @@ class ImportWalletMixin(object):
if write:
storage.save()
- def import_private_key(self, mixdepth, wif):
+ async def import_private_key(self, mixdepth, wif):
"""
Import a private key in WIF format.
@@ -2033,10 +2438,10 @@ class ImportWalletMixin(object):
"".format(wif))
self._imported[mixdepth].append((privkey, key_type))
- return self._cache_imported_key(mixdepth, privkey, key_type,
- len(self._imported[mixdepth]) - 1)
+ return await self._cache_imported_key(
+ mixdepth, privkey, key_type, len(self._imported[mixdepth]) - 1)
- def remove_imported_key(self, script=None, address=None, path=None):
+ async def remove_imported_key(self, script=None, address=None, path=None):
"""
Remove an imported key. Arguments are exclusive.
@@ -2062,9 +2467,9 @@ class ImportWalletMixin(object):
assert len(path) == 3
if not script:
- script = self.get_script_from_path(path)
+ script = await self.get_script_from_path(path)
if not address:
- address = self.get_address_from_path(path)
+ address = await self.get_address_from_path(path)
# we need to retain indices
self._imported[path[1]][path[2]] = (b'', -1)
@@ -2073,9 +2478,9 @@ class ImportWalletMixin(object):
del self._addr_map[address]
self._delete_cache_for_path(path)
- def _cache_imported_key(self, mixdepth, privkey, key_type, index):
+ async def _cache_imported_key(self, mixdepth, privkey, key_type, index):
path = (self._IMPORTED_ROOT_PATH, mixdepth, index)
- self._populate_maps((path,))
+ await self._populate_maps((path,))
return path
def _get_mixdepth_from_path(self, path):
@@ -2110,11 +2515,11 @@ class ImportWalletMixin(object):
def _is_imported_path(cls, path):
return len(path) == 3 and path[0] == cls._IMPORTED_ROOT_PATH
- def is_standard_wallet_script(self, path):
+ async def is_standard_wallet_script(self, path):
if self._is_imported_path(path):
- engine = self._get_pubkey_from_path(path)[1]
+ _, engine = await self._get_pubkey_from_path(path)
return engine == self._ENGINE
- return super().is_standard_wallet_script(path)
+ return await super().is_standard_wallet_script(path)
def path_repr_to_path(self, pathstr):
spath = pathstr.encode('ascii').split(b'/')
@@ -2151,8 +2556,8 @@ class BIP39WalletMixin(object):
_BIP39_EXTENSION_KEY = b'seed_extension'
MNEMONIC_LANG = 'english'
- def _load_storage(self, load_cache: bool = True) -> None:
- super()._load_storage(load_cache=load_cache)
+ async def _load_storage(self, load_cache: bool = True) -> None:
+ await super()._load_storage(load_cache=load_cache)
self._entropy_extension = self._storage.data.get(self._BIP39_EXTENSION_KEY)
@classmethod
@@ -2221,6 +2626,9 @@ class BIP32Wallet(BaseWallet):
# m is the master key's fingerprint
# other levels are ints
super().__init__(storage, **kwargs)
+
+ async def async_init(self, storage, **kwargs):
+ await super().async_init(storage, **kwargs)
assert self._index_cache is not None
assert self._verify_entropy(self._entropy)
@@ -2231,8 +2639,8 @@ class BIP32Wallet(BaseWallet):
# used to verify paths for sanity checking and for wallet id creation
self._key_ident = b'' # otherwise get_bip32_* won't work
- self._key_ident = self._get_key_ident()
- self._populate_maps(self.yield_known_bip32_paths())
+ self._key_ident = await self._get_key_ident()
+ await self._populate_maps(self.yield_known_bip32_paths())
self.disable_new_scripts = False
@classmethod
@@ -2258,8 +2666,8 @@ class BIP32Wallet(BaseWallet):
if write:
storage.save()
- def _load_storage(self, load_cache: bool = True) -> None:
- super()._load_storage(load_cache=load_cache)
+ async def _load_storage(self, load_cache: bool = True) -> None:
+ await super()._load_storage(load_cache=load_cache)
self._entropy = self._storage.data[self._STORAGE_ENTROPY_KEY]
self._index_cache = collections.defaultdict(
@@ -2273,7 +2681,7 @@ class BIP32Wallet(BaseWallet):
self.max_mixdepth = max(0, 0, *self._index_cache.keys())
- def _get_key_ident(self):
+ async def _get_key_ident(self):
return sha256(sha256(
self.get_bip32_priv_export(0, self.BIP32_EXT_ID).encode('ascii')).digest())\
.digest()[:3]
@@ -2320,7 +2728,7 @@ class BIP32Wallet(BaseWallet):
def _get_supported_address_types(cls):
return (cls.BIP32_EXT_ID, cls.BIP32_INT_ID)
- def _check_path(self, path):
+ async def _check_path(self, path):
md, address_type, index = self.get_details(path)
if not 0 <= md <= self.max_mixdepth:
@@ -2334,20 +2742,20 @@ class BIP32Wallet(BaseWallet):
#special case for timelocked addresses because for them the
#concept of a "next address" cant be used
self._set_index_cache(md, address_type, current_index + 1)
- self._populate_maps((path,))
+ await self._populate_maps((path,))
- def get_script_from_path(self, path,
+ async def get_script_from_path(self, path,
validate_cache: bool = False):
if self._is_my_bip32_path(path):
- self._check_path(path)
- return super().get_script_from_path(path,
+ await self._check_path(path)
+ return await super().get_script_from_path(path,
validate_cache=validate_cache)
- def get_address_from_path(self, path,
+ async def get_address_from_path(self, path,
validate_cache: bool = False):
if self._is_my_bip32_path(path):
- self._check_path(path)
- return super().get_address_from_path(path,
+ await self._check_path(path)
+ return await super().get_address_from_path(path,
validate_cache=validate_cache)
def get_path(self, mixdepth=None, address_type=None, index=None):
@@ -2426,10 +2834,10 @@ class BIP32Wallet(BaseWallet):
raise WalletCacheValidationFailed()
return privkey, self._ENGINE
- def _get_keypair_from_path(self, path,
+ async def _get_keypair_from_path(self, path,
validate_cache: bool = False):
if not self._is_my_bip32_path(path):
- return super()._get_keypair_from_path(path,
+ return await super()._get_keypair_from_path(path,
validate_cache=validate_cache)
cache = self._get_cache_for_path(path)
privkey = cache.get(b'p')
@@ -2457,16 +2865,16 @@ class BIP32Wallet(BaseWallet):
def _is_my_bip32_path(self, path):
return len(path) > 0 and path[0] == self._key_ident
- def is_standard_wallet_script(self, path):
+ async def is_standard_wallet_script(self, path):
return self._is_my_bip32_path(path)
- def get_new_script(self, mixdepth, address_type,
+ async def get_new_script(self, mixdepth, address_type,
validate_cache: bool = True):
if self.disable_new_scripts:
raise RuntimeError("Obtaining new wallet addresses "
+ "disabled, due to nohistory mode")
index = self._index_cache[mixdepth][address_type]
- return self.get_script(mixdepth, address_type, index,
+ return await self.get_script(mixdepth, address_type, index,
validate_cache=validate_cache)
def _set_index_cache(self, mixdepth, address_type, index):
@@ -2622,7 +3030,10 @@ class FidelityBondMixin(object):
def __init__(self, storage, **kwargs):
super().__init__(storage, **kwargs)
- self._populate_maps(self.yield_fidelity_bond_paths())
+
+ async def async_init(self, storage, **kwargs):
+ await super().async_init(storage, **kwargs)
+ await self._populate_maps(self.yield_fidelity_bond_paths())
@classmethod
def _time_number_to_timestamp(cls, timenumber):
@@ -2665,15 +3076,15 @@ class FidelityBondMixin(object):
def is_timelocked_path(cls, path):
return len(path) > 4 and path[4] == cls.BIP32_TIMELOCK_ID
- def _get_key_ident(self):
+ async def _get_key_ident(self):
first_path = self.get_path(0, BIP32Wallet.BIP32_EXT_ID)
- pub = self._get_pubkey_from_path(first_path)[0]
+ pub, _ = await self._get_pubkey_from_path(first_path)
return sha256(sha256(pub).digest()).digest()[:3]
- def is_standard_wallet_script(self, path):
+ async def is_standard_wallet_script(self, path):
if self.is_timelocked_path(path):
return False
- return super().is_standard_wallet_script(path)
+ return await super().is_standard_wallet_script(path)
@classmethod
def get_xpub_from_fidelity_bond_master_pub_key(cls, mpk):
@@ -2732,10 +3143,10 @@ class FidelityBondMixin(object):
else:
return super()._get_key_from_path(path)
- def _get_keypair_from_path(self, path,
+ async def _get_keypair_from_path(self, path,
validate_cache: bool = False):
if not self.is_timelocked_path(path):
- return super()._get_keypair_from_path(path,
+ return await super()._get_keypair_from_path(path,
validate_cache=validate_cache)
key_path = path[:-1]
locktime = path[-1]
@@ -2893,10 +3304,494 @@ class SegwitWalletFidelityBonds(FidelityBondMixin, SegwitWallet):
TYPE = TYPE_SEGWIT_WALLET_FIDELITY_BONDS
class TaprootWallet(BIP39WalletMixin, BIP86Wallet):
- # FIXME add other mixins if adapted
TYPE = TYPE_P2TR
+class BIP32FrostMixin(BaseWallet):
+
+ _STORAGE_ENTROPY_KEY = b'entropy'
+ _STORAGE_INDEX_CACHE = b'index_cache'
+ BIP32_MAX_PATH_LEVEL = 2**31
+ BIP32_EXT_ID = BaseWallet.ADDRESS_TYPE_EXTERNAL
+ BIP32_INT_ID = BaseWallet.ADDRESS_TYPE_INTERNAL
+ ENTROPY_BYTES = 16
+
+ def __init__(self, storage, **kwargs):
+ self._entropy = None
+ self._key_ident = None
+ self._index_cache = None
+ super().__init__(storage, **kwargs)
+
+ async def async_init(self, storage, **kwargs):
+ await super().async_init(storage, **kwargs)
+ assert self._index_cache is not None
+ assert self._verify_entropy(self._entropy)
+
+ _master_entropy = self._create_master_key()
+ assert _master_entropy
+ assert isinstance(_master_entropy, bytes)
+ self._master_key = self._derive_bip32_master_key(_master_entropy)
+
+ self._key_ident = b'' # otherwise get_bip32_* won't work
+ self._key_ident = await self._get_key_ident()
+ await self._populate_maps(self.yield_known_bip32_paths())
+ self.disable_new_scripts = False
+
+ @classmethod
+ def initialize(cls, storage, network, max_mixdepth=2, timestamp=None,
+ entropy=None, write=True):
+ if entropy and not cls._verify_entropy(entropy):
+ raise WalletError("Invalid entropy.")
+
+ super(BIP32FrostMixin, cls).initialize(
+ storage, network, max_mixdepth, timestamp, write=False)
+
+ if not entropy:
+ entropy = get_random_bytes(cls.ENTROPY_BYTES, True)
+
+ storage.data[cls._STORAGE_ENTROPY_KEY] = entropy
+ storage.data[cls._STORAGE_INDEX_CACHE] = {
+ _int_to_bytestr(i): {} for i in range(max_mixdepth + 1)}
+
+ if write:
+ storage.save()
+
+ async def _load_storage(self, load_cache: bool = True) -> None:
+ await super()._load_storage(load_cache=load_cache)
+ self._entropy = self._storage.data[self._STORAGE_ENTROPY_KEY]
+
+ self._index_cache = collections.defaultdict(
+ lambda: collections.defaultdict(int))
+
+ for md, data in self._storage.data[self._STORAGE_INDEX_CACHE].items():
+ md = int(md)
+ md_map = self._index_cache[md]
+ for t, k in data.items():
+ md_map[int(t)] = k
+
+ self.max_mixdepth = max(0, 0, *self._index_cache.keys())
+
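The `max(0, 0, *self._index_cache.keys())` idiom above deliberately repeats the constant: with a freshly initialized (empty) index cache the starred argument unpacks to nothing, and `max` with a single non-iterable argument raises `TypeError`, so two constants keep the call valid either way.

```python
# empty cache: *keys unpacks to nothing, max(0, 0) is still valid
keys = []
print(max(0, 0, *keys))   # 0

# populated cache: the highest recorded mixdepth wins
keys = [0, 1, 3]
print(max(0, 0, *keys))   # 3
```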
+ async def _get_key_ident(self):
+ return sha256(
+ sha256(
+ self.get_bip32_priv_export(
+ 0, self.BIP32_EXT_ID
+ ).encode('ascii')
+ ).digest()
+ ).digest()[:3]
+
+ def yield_known_paths(self):
+ return self.yield_known_bip32_paths()
+
+ def yield_known_bip32_paths(self):
+ for md in self._index_cache:
+ for address_type in (self.BIP32_EXT_ID, self.BIP32_INT_ID):
+ for i in range(self._index_cache[md][address_type]):
+ yield self.get_path(md, address_type, i)
+
+ def save(self):
+ for md, data in self._index_cache.items():
+ str_data = {}
+ str_md = _int_to_bytestr(md)
+
+ for t, k in data.items():
+ str_data[_int_to_bytestr(t)] = k
+
+ self._storage.data[self._STORAGE_INDEX_CACHE][str_md] = str_data
+
+ super().save()
+
+ def _create_master_key(self):
+ return self._entropy
+
+ @classmethod
+ def _verify_entropy(cls, ent):
+ return bool(ent)
+
+ @classmethod
+ def _derive_bip32_master_key(cls, seed):
+ return cls._ENGINE.derive_bip32_master_key(seed)
+
+ @classmethod
+ def _get_supported_address_types(cls):
+ return (cls.BIP32_EXT_ID, cls.BIP32_INT_ID)
+
+ async def _check_path(self, path):
+ md, address_type, index = self.get_details(path)
+
+ if not 0 <= md <= self.max_mixdepth:
+ raise WalletMixdepthOutOfRange()
+ assert address_type in self._get_supported_address_types()
+
+ current_index = self._index_cache[md][address_type]
+
+ if index == current_index \
+ and address_type != FidelityBondMixin.BIP32_TIMELOCK_ID:
+            # special case for timelocked addresses because for them the
+            # concept of a "next address" can't be used
+ self._set_index_cache(md, address_type, current_index + 1)
+ await self._populate_maps((path,))
+
+ async def get_script_from_path(self, path,
+ validate_cache: bool = False):
+ if self._is_my_bip32_path(path):
+ await self._check_path(path)
+ return await super().get_script_from_path(path,
+ validate_cache=validate_cache)
+
+ async def get_address_from_path(self, path,
+ validate_cache: bool = False):
+ if self._is_my_bip32_path(path):
+ await self._check_path(path)
+ return await super().get_address_from_path(path,
+ validate_cache=validate_cache)
+
+ def get_path(self, mixdepth=None, address_type=None, index=None):
+ if mixdepth is not None:
+ assert isinstance(mixdepth, Integral)
+ if not 0 <= mixdepth <= self.max_mixdepth:
+ raise WalletMixdepthOutOfRange()
+
+ if address_type is not None:
+ if mixdepth is None:
+ raise Exception("mixdepth must be set if address_type is set")
+
+ if index is not None:
+ assert isinstance(index, Integral)
+ if address_type is None:
+ raise Exception("address_type must be set if index is set")
+ assert index < self.BIP32_MAX_PATH_LEVEL
+ return tuple(chain(self._get_bip32_export_path(mixdepth, address_type),
+ (index,)))
+
+ return tuple(self._get_bip32_export_path(mixdepth, address_type))
+
+ def get_path_repr(self, path):
+ path = list(path)
+ assert self._is_my_bip32_path(path)
+ path.pop(0)
+ return 'm' + '/' + '/'.join(map(self._path_level_to_repr, path))
+
+ @classmethod
+ def _harden_path_level(cls, lvl):
+ assert isinstance(lvl, Integral)
+ if not 0 <= lvl < cls.BIP32_MAX_PATH_LEVEL:
+ raise WalletError("Unable to derive hardened path level from {}."
+ "".format(lvl))
+ return lvl + cls.BIP32_MAX_PATH_LEVEL
+
+ @classmethod
+ def _path_level_to_repr(cls, lvl):
+ assert isinstance(lvl, Integral)
+ if not 0 <= lvl < cls.BIP32_MAX_PATH_LEVEL * 2:
+ raise WalletError("Invalid path level {}.".format(lvl))
+ if lvl < cls.BIP32_MAX_PATH_LEVEL:
+ return str(lvl)
+ return str(lvl - cls.BIP32_MAX_PATH_LEVEL) + "'"
+
+ def path_repr_to_path(self, pathstr):
+ spath = pathstr.split('/')
+ assert len(spath) > 0
+ if spath[0] != 'm':
+ raise WalletError("Not a valid wallet path: {}".format(pathstr))
+
+ def conv_level(lvl):
+ if lvl[-1] == "'":
+ return self._harden_path_level(int(lvl[:-1]))
+ return int(lvl)
+
+ return tuple(chain((self._key_ident,), map(conv_level, spath[1:])))
+
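The hardened-level encoding used by `_harden_path_level`, `_path_level_to_repr`, and `path_repr_to_path` above round-trips as follows; this is a self-contained sketch with the constant mirroring `BIP32_MAX_PATH_LEVEL` (2**31):

```python
HARDENED = 2**31  # BIP32 hardened-derivation offset

def harden(lvl):
    assert 0 <= lvl < HARDENED
    return lvl + HARDENED

def level_repr(lvl):
    # hardened levels are printed with a trailing apostrophe
    if lvl < HARDENED:
        return str(lvl)
    return str(lvl - HARDENED) + "'"

def parse_level(s):
    if s.endswith("'"):
        return harden(int(s[:-1]))
    return int(s)

# e.g. a BIP86-style path: purpose'/coin'/account'/change/index
path = [harden(86), harden(0), harden(0), 0, 5]
repr_str = 'm/' + '/'.join(level_repr(l) for l in path)
print(repr_str)  # m/86'/0'/0'/0/5
assert [parse_level(s) for s in repr_str.split('/')[1:]] == path
```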
+ def _get_mixdepth_from_path(self, path):
+ if not self._is_my_bip32_path(path):
+ raise WalletInvalidPath(path)
+
+ return path[len(self._get_bip32_base_path())]
+
+ def _get_key_from_path(self, path,
+ validate_cache: bool = False):
+ raise NotImplementedError()
+
+ async def _get_keypair_from_path(self, path,
+ validate_cache: bool = False):
+ if not self._is_my_bip32_path(path):
+ return await super()._get_keypair_from_path(path,
+ validate_cache=validate_cache)
+ cache = self._get_cache_for_path(path)
+ privkey = None
+ pubkey = cache.get(b'P')
+ if pubkey is None or validate_cache:
+ new_pubkey, _ = await self._get_pubkey_from_path(
+ path, validate_cache=validate_cache)
+ if pubkey is None:
+ cache[b'P'] = pubkey = new_pubkey
+ elif pubkey != new_pubkey:
+ raise WalletCacheValidationFailed()
+ return privkey, pubkey, self._ENGINE
+
+ def _get_cache_keys_for_path(self, path):
+ if not self._is_my_bip32_path(path):
+ return super()._get_cache_keys_for_path(path)
+ return path[:1] + tuple([self._path_level_to_repr(lvl).encode('ascii')
+ for lvl in path[1:]])
+
+ def _is_my_bip32_path(self, path):
+ return len(path) > 0 and path[0] == self._key_ident
+
+ async def is_standard_wallet_script(self, path):
+ return self._is_my_bip32_path(path)
+
+ async def get_new_script(self, mixdepth, address_type,
+ validate_cache: bool = True):
+ if self.disable_new_scripts:
+ raise RuntimeError("Obtaining new wallet addresses "
+ + "disabled, due to nohistory mode")
+ index = self._index_cache[mixdepth][address_type]
+ return await self.get_script(mixdepth, address_type, index,
+ validate_cache=validate_cache)
+
+ def _set_index_cache(self, mixdepth, address_type, index):
+ """ Ensures that any update to index_cache dict only applies
+ to valid address types.
+ """
+ assert address_type in self._get_supported_address_types()
+ self._index_cache[mixdepth][address_type] = index
+
+ @deprecated
+ def get_key(self, mixdepth, address_type, index):
+ raise NotImplementedError()
+
+ def get_bip32_priv_export(self, mixdepth=None, address_type=None):
+ path = self._get_bip32_export_path(mixdepth, address_type)
+ return self._ENGINE.derive_bip32_priv_export(self._master_key, path)
+
+ def get_bip32_pub_export(self, mixdepth=None, address_type=None):
+ path = self._get_bip32_export_path(mixdepth, address_type)
+ return self._ENGINE.derive_bip32_pub_export(self._master_key, path)
+
+ def _get_bip32_export_path(self, mixdepth=None, address_type=None):
+ if mixdepth is None:
+ assert address_type is None
+ path = tuple()
+ else:
+ assert 0 <= mixdepth <= self.max_mixdepth
+ if address_type is None:
+ path = (self._get_bip32_mixdepth_path_level(mixdepth),)
+ else:
+ path = (self._get_bip32_mixdepth_path_level(mixdepth), address_type)
+
+ return tuple(chain(self._get_bip32_base_path(), path))
+
+ def _get_bip32_base_path(self):
+ return self._key_ident,
+
+ @classmethod
+ def _get_bip32_mixdepth_path_level(cls, mixdepth):
+ return mixdepth
+
+ def get_next_unused_index(self, mixdepth, address_type):
+ assert 0 <= mixdepth <= self.max_mixdepth
+
+ if (self._index_cache[mixdepth][address_type] >=
+ self.BIP32_MAX_PATH_LEVEL):
+ raise WalletError("All addresses used up, cannot "
+ "generate new ones.")
+
+ return self._index_cache[mixdepth][address_type]
+
+ def get_mnemonic_words(self):
+ return ' '.join(mn_encode(hexlify(self._entropy).decode('ascii'))), None
+
+ @classmethod
+ def entropy_from_mnemonic(cls, seed):
+ words = seed.split()
+ if len(words) != 12:
+ raise WalletError("Seed phrase must consist of exactly 12 words.")
+
+ return unhexlify(mn_decode(words))
+
+ def get_wallet_id(self):
+ return hexlify(self._key_ident).decode('ascii')
+
+ def set_next_index(self, mixdepth, address_type, index, force=False):
+ if not (force or index <= self._index_cache[mixdepth][address_type]):
+ raise Exception("cannot advance index without force=True")
+ self._set_index_cache(mixdepth, address_type, index)
+
+ def get_details(self, path):
+ if not self._is_my_bip32_path(path):
+ raise Exception("path does not belong to wallet")
+ return self._get_mixdepth_from_path(path), path[-2], path[-1]
+
+
+class BIP32PurposedFrostMixin(BIP32FrostMixin):
+
+ _PURPOSE = 2**31 + 86
+
+ def _get_bip32_base_path(self):
+ return (self._key_ident, self._PURPOSE, self._ENGINE.BIP44_COIN_TYPE)
+
+ @classmethod
+ def _get_bip32_mixdepth_path_level(cls, mixdepth):
+ assert 0 <= mixdepth < 2**31
+ return cls._harden_path_level(mixdepth)
+
+ def _get_mixdepth_from_path(self, path):
+ if not self._is_my_bip32_path(path):
+ raise WalletInvalidPath(path)
+
+ return path[len(self._get_bip32_base_path())] - 2**31
+
+
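The derivation scheme in `BIP32PurposedFrostMixin` (hardened BIP86-style purpose `86'`, hardened mixdepth level, mixdepth recovered by subtracting `2**31`) can be sketched standalone as below; the names and the placeholder `key_ident`/`coin_type` values are illustrative, not the wallet's actual API:

```python
HARDENED = 2**31
PURPOSE_TR = HARDENED + 86  # hardened BIP86-style purpose, as in _PURPOSE

def harden(level):
    # hardened derivation offsets a path level by 2^31
    assert 0 <= level < HARDENED
    return level + HARDENED

def base_path(key_ident, coin_type):
    # mirrors _get_bip32_base_path: (key_ident, purpose', coin_type)
    return (key_ident, PURPOSE_TR, coin_type)

def full_path(key_ident, coin_type, mixdepth, address_type, index):
    # the mixdepth is the only hardened level below the base path
    return base_path(key_ident, coin_type) + (harden(mixdepth),
                                              address_type, index)

def mixdepth_from_path(path, key_ident, coin_type):
    # mirrors _get_mixdepth_from_path: un-harden the level that sits
    # immediately after the base path
    return path[len(base_path(key_ident, coin_type))] - HARDENED
```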
+class FrostWallet(BIP39WalletMixin, BIP32PurposedFrostMixin):
+
+ _ENGINE = ENGINES[TYPE_P2TR_FROST]
+ _STORAGE_HOSTSECKEY_KEY = b'hostseckey'
+ TYPE = TYPE_P2TR_FROST
+
+ def __init__(self, storage, dkg_storage, recovery_storage, **kwargs):
+ self._dkg_storage = dkg_storage
+ self._recovery_storage = recovery_storage
+ self._hostseckey = None
+ self._dkg = None
+ self.client_factory = None
+ self.ipc_client = None
+ super().__init__(storage, **kwargs)
+
+ async def async_init(self, storage, **kwargs):
+ await super().async_init(storage, **kwargs)
+ assert self._hostseckey is not None
+ if self._dkg_storage is None or self._recovery_storage is None:
+ assert self._dkg is None
+ return
+ assert self._dkg is not None
+
+ @property
+ def dkg(self):
+ return self._dkg
+
+ def set_client_factory(self, client_factory):
+ self.client_factory = client_factory
+
+ def set_ipc_client(self, ipc_client):
+ self.ipc_client = ipc_client
+
+ @classmethod
+ def initialize(cls, storage, dkg_storage, recovery_storage, network,
+ max_mixdepth=2, timestamp=None, entropy=None,
+ entropy_extension=None, write=True, **kwargs):
+ super(FrostWallet, cls).initialize(
+ storage, network, max_mixdepth, timestamp,
+ entropy=entropy, entropy_extension=entropy_extension,
+ write=False, **kwargs)
+
+ entropy = storage.data[cls._STORAGE_ENTROPY_KEY]
+ hostseckey = btc.hostseckey_from_entropy(entropy)
+ storage.data[cls._STORAGE_HOSTSECKEY_KEY] = hostseckey
+
+ if dkg_storage.data != {}:
+ # prevent accidentally overwriting existing wallet
+ raise WalletError("Refusing to initialize wallet in non-empty "
+ "dkg storage.")
+
+ if recovery_storage.data != {}:
+ # prevent accidentally overwriting existing wallet
+ raise WalletError("Refusing to initialize wallet in non-empty "
+ "recovery storage.")
+
+ bnetwork = network.encode('ascii')
+ if storage.data[b'network'] != bnetwork:
+ raise WalletError(f'Refusing to initialize wallet with wrong '
+ f'network: {network}.')
+
+ if storage.data[b'wallet_type'] != cls.TYPE:
+ raise WalletError(f'Refusing to initialize wallet with wrong '
+ f'wallet type: {cls.TYPE}.')
+
+ if not timestamp:
+ timestamp = datetime.now().strftime('%Y/%m/%d %H:%M:%S')
+
+ dkg_storage.data[b'network'] = bnetwork
+ dkg_storage.data[b'created'] = timestamp.encode('ascii')
+ dkg_storage.data[b'wallet_type'] = cls.TYPE
+
+ recovery_storage.data[b'network'] = bnetwork
+ recovery_storage.data[b'created'] = timestamp.encode('ascii')
+ recovery_storage.data[b'wallet_type'] = cls.TYPE
+
+ DKGManager.initialize(dkg_storage, recovery_storage)
+
+ if write:
+ storage.save()
+ dkg_storage.save()
+ recovery_storage.save()
+
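The non-empty-storage guard and metadata stamping applied to both auxiliary storages above reduce to one pattern; a sketch where plain dicts stand in for the project's storage objects and the `wallet_type` value is arbitrary:

```python
class WalletError(Exception):
    pass

def init_aux_storage(data, network, wallet_type, timestamp):
    # refuse to clobber an existing wallet (as for dkg_storage and
    # recovery_storage above), then stamp the identifying metadata
    if data != {}:
        raise WalletError("Refusing to initialize wallet in non-empty "
                          "storage.")
    data[b'network'] = network.encode('ascii')
    data[b'created'] = timestamp.encode('ascii')
    data[b'wallet_type'] = wallet_type
```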
+ async def _load_storage(self, load_cache: bool = True) -> None:
+ await super()._load_storage(load_cache=load_cache)
+ self._load_dkg_storage()
+ self._load_recovery_storage()
+ self._hostseckey = self._storage.data[self._STORAGE_HOSTSECKEY_KEY]
+ if self._dkg_storage is None or self._recovery_storage is None:
+ return
+ self._dkg = DKGManager(self, self._dkg_storage, self._recovery_storage)
+
+ def _load_dkg_storage(self) -> None:
+ if self._dkg_storage is None:
+ return
+ dkgs_wallet_type = self._dkg_storage.data[b'wallet_type']
+ if dkgs_wallet_type != self.TYPE:
+ raise WalletError(f'Wrong class to initialize dkg storage '
+ f'of type {dkgs_wallet_type}.')
+ dkgs_network = self._dkg_storage.data[b'network'].decode('ascii')
+ if dkgs_network != self.network:
+ raise WalletError(f'Wrong network to initialize dkg storage '
+ f'of {dkgs_network} network.')
+
+ def _load_recovery_storage(self) -> None:
+ if self._recovery_storage is None:
+ return
+ rs_wallet_type = self._recovery_storage.data[b'wallet_type']
+ if rs_wallet_type != self.TYPE:
+ raise WalletError(f'Wrong class to initialize recovery storage '
+ f'of type {rs_wallet_type}.')
+ rs_network = self._recovery_storage.data[b'network'].decode('ascii')
+ if rs_network != self.network:
+ raise WalletError(f'Wrong network to initialize recovery storage '
+ f'of {rs_network} network.')
+
+ def save(self):
+ if self._dkg is not None:
+ self._dkg.save()
+ super().save()
+
+ def close(self):
+ if self._dkg is not None:
+ self._dkg_storage.close()
+ self._recovery_storage.close()
+ super().close()
+
+ async def _get_keypair_from_path(self, path,
+ validate_cache: bool = False):
+ if not self._is_my_bip32_path(path):
+ return await super()._get_keypair_from_path(path,
+ validate_cache=validate_cache)
+ cache = self._get_cache_for_path(path)
+ privkey = None
+ pubkey = cache.get(b'P')
+ if pubkey is None or validate_cache:
+ mixdepth, address_type, index = self.get_details(path)
+ new_pubkey = await self.ipc_client.get_dkg_pubkey(
+ mixdepth, address_type, index)
+ if new_pubkey is None:
+ raise Exception(f'_get_keypair_from_path: pubkey not found'
+ f' for ({mixdepth}, {address_type}, {index})')
+ if pubkey is None:
+ cache[b'P'] = pubkey = new_pubkey
+ elif pubkey != new_pubkey:
+ raise WalletCacheValidationFailed()
+ return privkey, pubkey, self._ENGINE
+
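The cache/IPC lookup in `_get_keypair_from_path` reduces to the pattern below; a sketch in which a generic async `fetch` stands in for `ipc_client.get_dkg_pubkey`:

```python
import asyncio

class WalletCacheValidationFailed(Exception):
    pass

async def cached_pubkey(cache, fetch, validate=False):
    # use the cached pubkey when present; fetch over IPC when it is
    # missing or when validation is requested, and treat a mismatch
    # between cache and IPC as fatal
    pubkey = cache.get(b'P')
    if pubkey is None or validate:
        new_pubkey = await fetch()
        if new_pubkey is None:
            raise Exception('pubkey not found')
        if pubkey is None:
            cache[b'P'] = pubkey = new_pubkey
        elif pubkey != new_pubkey:
            raise WalletCacheValidationFailed()
    return pubkey
```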
+
class FidelityBondWatchonlyWallet(FidelityBondMixin, BIP84Wallet):
TYPE = TYPE_WATCHONLY_FIDELITY_BONDS
_ENGINE = ENGINES[TYPE_WATCHONLY_P2WPKH]
@@ -2918,14 +3813,14 @@ class FidelityBondWatchonlyWallet(FidelityBondMixin, BIP84Wallet):
validate_cache: bool = False):
raise WalletCannotGetPrivateKeyFromWatchOnly()
- def _get_keypair_from_path(self, path,
+ async def _get_keypair_from_path(self, path,
validate_cache: bool = False):
raise WalletCannotGetPrivateKeyFromWatchOnly()
- def _get_pubkey_from_path(self, path,
+ async def _get_pubkey_from_path(self, path,
validate_cache: bool = False):
if not self._is_my_bip32_path(path):
- return super()._get_pubkey_from_path(path,
+ return await super()._get_pubkey_from_path(path,
validate_cache=validate_cache)
if self.is_timelocked_path(path):
key_path = path[:-1]
@@ -2952,13 +3847,6 @@ class FidelityBondWatchonlyWallet(FidelityBondMixin, BIP84Wallet):
return pubkey, self._ENGINE
-class FrostWallet(object):
-
- _PURPOSE = 2**31 + 86
- _ENGINE = ENGINES[TYPE_P2TR]
- TYPE = TYPE_P2TR_FROST
-
-
WALLET_IMPLEMENTATIONS = {
LegacyWallet.TYPE: LegacyWallet,
SegwitLegacyWallet.TYPE: SegwitLegacyWallet,
diff --git a/src/jmclient/wallet_rpc.py b/src/jmclient/wallet_rpc.py
index 34ab553..e3df708 100644
--- a/src/jmclient/wallet_rpc.py
+++ b/src/jmclient/wallet_rpc.py
@@ -23,9 +23,13 @@ from jmclient import Taker, jm_single, \
get_schedule, get_tumbler_parser, schedule_to_text, \
tumbler_filter_orders_callback, tumbler_taker_finished_update, \
validate_address, FidelityBondMixin, BaseWallet, WalletError, \
- ScheduleGenerationErrorNoFunds, BIP39WalletMixin, auth, wallet_signmessage
+ ScheduleGenerationErrorNoFunds, BIP39WalletMixin, auth, \
+ wallet_signmessage, FrostWallet
from jmbase.support import get_log, utxostr_to_utxo, JM_CORE_VERSION
+from .frost_ipc import FrostIPCClient
+
+
jlog = get_log()
api_version_string = "/api/v1"
@@ -192,7 +196,9 @@ class JMWalletDaemon(Service):
"tx_fees")
def get_client_factory(self):
- return JMClientProtocolFactory(self.taker)
+ cfactory = JMClientProtocolFactory(self.taker)
+ wallet = self.services["wallet"]
+ if isinstance(wallet, FrostWallet):
+ # give the FROST wallet access to the client protocol factory
+ wallet.set_client_factory(cfactory)
+ return cfactory
def activate_coinjoin_state(self, state):
""" To be set when a maker or taker
@@ -656,7 +662,7 @@ class JMWalletDaemon(Service):
)
@app.route('/wallet/<string:walletname>/display', methods=['GET'])
- def displaywallet(self, request, walletname):
+ async def displaywallet(self, request, walletname):
print_req(request)
self.check_cookie(request)
if not self.services["wallet"]:
@@ -666,7 +672,8 @@ class JMWalletDaemon(Service):
jlog.warn("called displaywallet with wrong wallet")
raise InvalidRequestFormat()
else:
- walletinfo = wallet_display(self.services["wallet"], False, jsonified=True)
+ walletinfo = await wallet_display(
+ self.services["wallet"], False, jsonified=True)
return make_jmwalletd_response(request, walletname=walletname, walletinfo=walletinfo)
@app.route('/wallet/<string:walletname>/rescanblockchain/<int:blockheight>', methods=['GET'])
@@ -787,7 +794,7 @@ class JMWalletDaemon(Service):
)
@app.route('/wallet/<string:walletname>/taker/direct-send', methods=['POST'])
- def directsend(self, request, walletname):
+ async def directsend(self, request, walletname):
""" Use the contents of the POST body to do a direct send from
the active wallet at the chosen mixdepth.
"""
@@ -818,13 +825,14 @@ class JMWalletDaemon(Service):
raise InvalidRequestFormat()
try:
- tx = direct_send(self.services["wallet"],
- int(payment_info_json["mixdepth"]),
- [(
- payment_info_json["destination"],
- int(payment_info_json["amount_sats"])
- )],
- return_transaction=True, answeryes=True)
+ tx = await direct_send(
+ self.services["wallet"],
+ int(payment_info_json["mixdepth"]),
+ [(
+ payment_info_json["destination"],
+ int(payment_info_json["amount_sats"])
+ )],
+ return_transaction=True, answeryes=True)
jm_single().config.set("POLICY", "tx_fees",
self.default_policy_tx_fees)
except AssertionError:
@@ -847,7 +855,7 @@ class JMWalletDaemon(Service):
txinfo=human_readable_transaction(tx, False))
@app.route('/wallet/<string:walletname>/maker/start', methods=['POST'])
- def start_maker(self, request, walletname):
+ async def start_maker(self, request, walletname):
""" Use the configuration in the POST body to start the yield generator:
"""
print_req(request)
@@ -894,7 +902,7 @@ class JMWalletDaemon(Service):
raise ServiceAlreadyStarted()
# don't even start up the service if there aren't any coins
# to offer:
- def setup_sanitycheck_balance():
+ async def setup_sanitycheck_balance():
# note: this will only be non-zero if coins are confirmed.
# note: a call to start_maker necessarily is after a successful
# sync has already happened (this is different from CLI yg).
@@ -910,7 +918,7 @@ class JMWalletDaemon(Service):
# We must also not start if the only coins available are of
# the TL type *even* if the TL is expired. This check is done
# here early, as above, to avoid the maker service starting.
- utxos = self.services["wallet"].get_all_utxos()
+ utxos = await self.services["wallet"].get_all_utxos()
# remove any TL type:
utxos = [u for u in utxos.values() if not \
FidelityBondMixin.is_timelocked_path(u["path"])]
@@ -997,7 +1005,7 @@ class JMWalletDaemon(Service):
already_locked=already_locked)
@app.route('/wallet/create', methods=["POST"])
- def createwallet(self, request):
+ async def createwallet(self, request):
print_req(request)
# we only handle one wallet at a time;
# if there is a currently unlocked wallet,
@@ -1011,7 +1019,7 @@ class JMWalletDaemon(Service):
wallet_cls = self.get_wallet_cls_from_type(
request_data["wallettype"])
try:
- wallet = create_wallet(self.get_wallet_name_from_req(
+ wallet = await create_wallet(self.get_wallet_name_from_req(
request_data["walletname"]),
request_data["password"].encode("ascii"),
4, wallet_cls=wallet_cls)
@@ -1028,7 +1036,7 @@ class JMWalletDaemon(Service):
seedphrase=seed)
@app.route('/wallet/recover', methods=["POST"])
- def recoverwallet(self, request):
+ async def recoverwallet(self, request):
print_req(request)
# we only handle one wallet at a time;
# if there is a currently unlocked wallet,
@@ -1052,7 +1060,8 @@ class JMWalletDaemon(Service):
# should only occur if the seedphrase is not valid BIP39:
raise InvalidRequestFormat()
try:
- wallet = create_wallet(self.get_wallet_name_from_req(
+ wallet = await create_wallet(
+ self.get_wallet_name_from_req(
request_data["walletname"]),
request_data["password"].encode("ascii"),
4, wallet_cls=wallet_cls, entropy=entropy)
@@ -1067,7 +1076,7 @@ class JMWalletDaemon(Service):
seedphrase=seedphrase)
@app.route('/wallet/<string:walletname>/unlock', methods=['POST'])
- def unlockwallet(self, request, walletname):
+ async def unlockwallet(self, request, walletname):
""" If a user succeeds in authenticating and opening a
wallet, we start the corresponding wallet service.
Notice that in the case the user fails for any reason,
@@ -1095,7 +1104,7 @@ class JMWalletDaemon(Service):
if walletname == self.wallet_name:
try:
# returned wallet object is ditched:
- open_test_wallet_maybe(
+ await open_test_wallet_maybe(
wallet_path, walletname, 4,
password=password.encode("utf-8"),
ask_for_password=False,
@@ -1120,11 +1129,15 @@ class JMWalletDaemon(Service):
# This is a different wallet than the one currently open;
# try to open it, then initialize the service(s):
try:
- wallet = open_test_wallet_maybe(
+ wallet = await open_test_wallet_maybe(
wallet_path, walletname, 4,
password=password.encode("utf-8"),
ask_for_password=False,
gap_limit = jm_single().config.getint("POLICY", "gaplimit"))
+ if isinstance(wallet, FrostWallet):
+ ipc_client = FrostIPCClient(wallet)
+ await ipc_client.async_init()
+ wallet.set_ipc_client(ipc_client)
except StoragePasswordError:
raise InvalidCredentials()
except RetryableStorageError:
@@ -1156,7 +1169,7 @@ class JMWalletDaemon(Service):
#route to get external address for deposit
@app.route('/wallet/<string:walletname>/address/new/<string:mixdepth>', methods=['GET'])
- def getaddress(self, request, walletname, mixdepth):
+ async def getaddress(self, request, walletname, mixdepth):
self.check_cookie(request)
if not self.services["wallet"]:
raise NoWalletFound()
@@ -1166,18 +1179,18 @@ class JMWalletDaemon(Service):
mixdepth = int(mixdepth)
except ValueError:
raise InvalidRequestFormat()
- address = self.services["wallet"].get_external_addr(mixdepth)
+ address = await self.services["wallet"].get_external_addr(mixdepth)
return make_jmwalletd_response(request, address=address)
@app.route('/wallet/<string:walletname>/address/timelock/new/<string:lockdate>', methods=['GET'])
- def gettimelockaddress(self, request, walletname, lockdate):
+ async def gettimelockaddress(self, request, walletname, lockdate):
self.check_cookie(request)
if not self.services["wallet"]:
raise NoWalletFound()
if not self.wallet_name == walletname:
raise InvalidRequestFormat()
try:
- timelockaddress = wallet_gettimelockaddress(
+ timelockaddress = await wallet_gettimelockaddress(
self.services["wallet"].wallet, lockdate)
except Exception:
raise InvalidRequestFormat()
@@ -1271,7 +1284,7 @@ class JMWalletDaemon(Service):
#route to list utxos
@app.route('/wallet/<string:walletname>/utxos', methods=['GET'])
- def listutxos(self, request, walletname):
+ async def listutxos(self, request, walletname):
self.check_cookie(request)
if not self.services["wallet"]:
raise NoWalletFound()
@@ -1279,7 +1292,8 @@ class JMWalletDaemon(Service):
raise InvalidRequestFormat()
# note: the output of `showutxos` is already a string for CLI;
# but we return json:
- utxos = json.loads(wallet_showutxos(self.services["wallet"], False))
+ utxos = json.loads(
+ await wallet_showutxos(self.services["wallet"], False))
utxos_response = self.get_listutxos_response(utxos)
return make_jmwalletd_response(request, utxos=utxos_response)
@@ -1477,7 +1491,6 @@ class JMWalletDaemon(Service):
raise ServiceAlreadyStarted()
self.tumbler_options = tumbler_options
-
self.taker = Taker(self.services["wallet"],
schedule,
max_cj_fee=max_cj_fee,
diff --git a/src/jmclient/wallet_service.py b/src/jmclient/wallet_service.py
index ae0445a..11c9a2e 100644
--- a/src/jmclient/wallet_service.py
+++ b/src/jmclient/wallet_service.py
@@ -1,14 +1,15 @@
#! /usr/bin/env python
+import asyncio
import collections
import itertools
import time
-import sys
from typing import Dict, List, Optional, Set, Tuple
from decimal import Decimal
from copy import deepcopy
from twisted.internet import reactor
from twisted.internet import task
+from twisted.internet.defer import Deferred
from twisted.application.service import Service
from numbers import Integral
import jmbitcoin as btc
@@ -16,9 +17,10 @@ from jmclient.configure import jm_single, get_log
from jmclient.output import fmt_tx_data
from jmclient.blockchaininterface import (INF_HEIGHT, BitcoinCoreInterface,
BitcoinCoreNoHistoryInterface)
-from jmclient.wallet import FidelityBondMixin, BaseWallet, TaprootWallet
+from jmclient.wallet import (FidelityBondMixin, BaseWallet, TaprootWallet,
+ FrostWallet)
from jmbase import (stop_reactor, hextobin, utxo_to_utxostr,
- jmprint, EXIT_SUCCESS, EXIT_FAILURE)
+ twisted_sys_exit, jmprint, EXIT_SUCCESS, EXIT_FAILURE)
from .descriptor import descsum_create
"""Wallet service
@@ -43,9 +45,10 @@ class WalletService(Service):
# the JM wallet object.
self.bci = jm_single().bc_interface
- # main loop used to check for transactions, instantiated
+ # main task used to check for transactions, instantiated
# after wallet is synced:
- self.monitor_loop = None
+ self.service_task = None
+ self.monitor_task = None
self.wallet = wallet
self.synced = False
@@ -139,7 +142,7 @@ class WalletService(Service):
Here wallet sync.
"""
super().startService()
- self.request_sync_wallet()
+ self.service_task = asyncio.create_task(self.request_sync_wallet())
def stopService(self):
""" Encapsulates shut down actions.
@@ -147,8 +150,12 @@ class WalletService(Service):
should *not* be restarted, instead a new
WalletService instance should be created.
"""
- if self.monitor_loop and self.monitor_loop.running:
- self.monitor_loop.stop()
+ if self.monitor_task and not self.monitor_task.done():
+ self.monitor_task.cancel()
+ if self.service_task and not self.service_task.done():
+ self.service_task.cancel()
+ self.monitor_task = None
+ self.service_task = None
self.wallet.close()
super().stopService()
@@ -167,13 +174,13 @@ class WalletService(Service):
"""
self.restart_callback = callback
- def request_sync_wallet(self):
+ async def request_sync_wallet(self):
""" Ensures wallet sync is complete
before the main event loop starts.
"""
if self.bci is not None:
- d = task.deferLater(reactor, 0.0, self.sync_wallet)
- d.addCallback(self.start_wallet_monitoring)
+ syncresult = await self.sync_wallet()
+ await self.start_wallet_monitoring(syncresult)
def register_callbacks(self, callbacks, txinfo, cb_type="all"):
""" Register callbacks that will be called by the
@@ -214,7 +221,7 @@ class WalletService(Service):
assert False, "Invalid argument: " + cb_type
- def start_wallet_monitoring(self, syncresult):
+ async def start_wallet_monitoring(self, syncresult):
""" Once the initialization of the service
(currently, means: wallet sync) is complete,
we start the main monitoring jobs of the
@@ -230,9 +237,16 @@ class WalletService(Service):
reactor.stop()
return
jlog.info("Starting transaction monitor in walletservice")
- self.monitor_loop = task.LoopingCall(
- self.transaction_monitor)
- self.monitor_loop.start(5.0)
+
+ async def monitor_task():
+ while True:
+ try:
+ await self.transaction_monitor()
+ await asyncio.sleep(5)
+ except asyncio.CancelledError:
+ break
+
+ self.monitor_task = asyncio.create_task(monitor_task())
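The LoopingCall-to-asyncio translation above follows a reusable pattern: run a coroutine on a fixed interval until the task is cancelled. A minimal sketch (`start_polling` is illustrative, not the service's API):

```python
import asyncio

def start_polling(coro_fn, interval):
    # asyncio replacement for twisted's task.LoopingCall: call coro_fn,
    # sleep, repeat; cancellation ends the loop cleanly
    async def loop():
        while True:
            try:
                await coro_fn()
                await asyncio.sleep(interval)
            except asyncio.CancelledError:
                break
    return asyncio.create_task(loop())
```

Catching CancelledError inside the loop (rather than letting it propagate) makes the task finish normally on cancel, which is what lets `stopService` simply call `cancel()` on it.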
def import_non_wallet_address(self, address):
""" Used for keeping track of transactions which
@@ -309,7 +323,7 @@ class WalletService(Service):
last = False
yield tx
- def transaction_monitor(self):
+ async def transaction_monitor(self):
"""Keeps track of any changes in the wallet (new transactions).
- Intended to be run as a twisted task.LoopingCall so that this
+ Intended to be run as a periodic asyncio task so that this
Service is constantly in near-realtime sync with the blockchain.
@@ -354,14 +368,15 @@ class WalletService(Service):
self.bci.get_deser_from_gettransaction(res)
if txd is None:
continue
- removed_utxos, added_utxos = self.wallet.process_new_tx(txd, height)
+ removed_utxos, added_utxos = await self.wallet.process_new_tx(
+ txd, height)
if txid not in self.processed_txids:
# apply checks to disable/freeze utxos to reused addrs if needed:
self.check_for_reuse(added_utxos)
# TODO note that this log message will be missed if confirmation
# is absurdly fast, this is considered acceptable compared with
# additional complexity.
- self.log_new_tx(removed_utxos, added_utxos, txid)
+ await self.log_new_tx(removed_utxos, added_utxos, txid)
self.processed_txids.add(txid)
# first fire 'all' type callbacks, irrespective of if the
@@ -375,7 +390,10 @@ class WalletService(Service):
for f in self.callbacks["all"]:
# note we need no return value as we will never
# remove these from the list
- f(txd, txid)
+ if asyncio.iscoroutinefunction(f):
+ await f(txd, txid)
+ else:
+ f(txd, txid)
# txid is not always available at the time of callback registration.
# Migrate any callbacks registered under the provisional key, and
@@ -401,9 +419,14 @@ class WalletService(Service):
if len(added_utxos) > 0 or len(removed_utxos) > 0 \
or txid in self.active_txs:
if confs == 0:
- callbacks = [f for f in
- self.callbacks["unconfirmed"].pop(txid, [])
- if not f(txd, txid)]
+ callbacks = []
+ for f in self.callbacks["unconfirmed"].pop(txid, []):
+ if asyncio.iscoroutinefunction(f):
+ if not await f(txd, txid):
+ callbacks.append(f)
+ else:
+ if not f(txd, txid):
+ callbacks.append(f)
if callbacks:
self.callbacks["unconfirmed"][txid] = callbacks
else:
@@ -414,9 +437,14 @@ class WalletService(Service):
# the height of the utxo in UtxoManager
self.active_txs[txid] = txd
elif confs > 0:
- callbacks = [f for f in
- self.callbacks["confirmed"].pop(txid, [])
- if not f(txd, txid, confs)]
+ callbacks = []
+ for f in self.callbacks["confirmed"].pop(txid, []):
+ if asyncio.iscoroutinefunction(f):
+ if not await f(txd, txid, confs):
+ callbacks.append(f)
+ else:
+ if not f(txd, txid, confs):
+ callbacks.append(f)
if callbacks:
self.callbacks["confirmed"][txid] = callbacks
else:
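The retain-until-done dispatch used for the unconfirmed and confirmed callback lists above can be factored as a helper. A sketch; note the check must be `asyncio.iscoroutinefunction`, since a registered callback is a function object, not an already-created coroutine:

```python
import asyncio

async def fire(f, *args):
    # call a callback that may be a plain function or a coroutine function
    if asyncio.iscoroutinefunction(f):
        return await f(*args)
    return f(*args)

async def fire_and_retain(callbacks, *args):
    # retain callbacks whose return value is falsy, as the unconfirmed
    # and confirmed callbacks are re-registered until they signal done
    retained = []
    for f in callbacks:
        if not await fire(f, *args):
            retained.append(f)
    return retained
```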
@@ -458,25 +486,26 @@ class WalletService(Service):
# processed and so do nothing.
return True
- def log_new_tx(self, removed_utxos, added_utxos, txid):
+ async def log_new_tx(self, removed_utxos, added_utxos, txid):
""" Changes to the wallet are logged at INFO level by
the WalletService.
"""
- def report_changed(x, utxos):
+ async def report_changed(x, utxos):
if len(utxos.keys()) > 0:
jlog.info(x + ' utxos=\n{}'.format('\n'.join(
- '{} - {}'.format(utxo_to_utxostr(u)[1],
- fmt_tx_data(tx_data, self)) for u,
- tx_data in utxos.items())))
+ ['{} - {}'.format(
+ utxo_to_utxostr(u)[1],
+ await fmt_tx_data(tx_data, self))
+ for u, tx_data in utxos.items()])))
- report_changed("Removed", removed_utxos)
- report_changed("Added", added_utxos)
+ await report_changed("Removed", removed_utxos)
+ await report_changed("Added", added_utxos)
""" Wallet syncing code
"""
- def sync_wallet(self, fast=True):
+ async def sync_wallet(self, fast=True):
""" Syncs wallet; note that if slow sync
requires multiple rounds this must be called
until self.synced is True.
@@ -489,9 +518,9 @@ class WalletService(Service):
if self.synced:
return True
if fast:
- self.sync_wallet_fast()
+ await self.sync_wallet_fast()
else:
- self.sync_addresses()
+ await self.sync_addresses()
self.sync_unspent()
# Don't attempt updates on transactions that existed
# before startup
@@ -502,22 +531,22 @@ class WalletService(Service):
self.bci.set_wallet_no_history(self.wallet)
return self.synced
- def resync_wallet(self, fast=True):
+ async def resync_wallet(self, fast=True):
""" The self.synced state is generally
updated to True, once, at the start of
a run of a particular program. Here we
can manually force re-sync.
"""
self.synced = False
- self.sync_wallet(fast=fast)
+ await self.sync_wallet(fast=fast)
- def sync_wallet_fast(self):
+ async def sync_wallet_fast(self):
"""Exploits the fact that given an index_cache,
all addresses necessary should be imported, so we
can just list all used addresses to find the right
index values.
"""
- self.sync_addresses_fast()
+ await self.sync_addresses_fast()
self.sync_unspent()
def has_address_been_used(self, address):
@@ -545,7 +574,7 @@ class WalletService(Service):
used_addresses.add(addr_info[0])
self.used_addresses = used_addresses
- def sync_addresses_fast(self):
+ async def sync_addresses_fast(self):
"""Locates all used addresses in the account (whether spent or
unspent outputs), and then, assuming that all such usages must be
related to our wallet, calculates the correct wallet indices and
@@ -562,7 +591,7 @@ class WalletService(Service):
# delegate initial address import to sync_addresses
# this should be fast because "getaddressesbyaccount" should return
# an empty list in this case
- self.sync_addresses()
+ await self.sync_addresses()
self.synced = True
return
@@ -592,9 +621,15 @@ class WalletService(Service):
# showing imported addresses. Hence the gap-limit import at the end
# to ensure this is always true.
remaining_used_addresses = self.used_addresses.copy()
- addresses, saved_indices = self.collect_addresses_init()
- for addr in addresses:
- remaining_used_addresses.discard(addr)
+ if isinstance(self.wallet, (TaprootWallet, FrostWallet)):
+ pubkeys, saved_indices = await self.collect_pubkeys_init()
+ for pubkey in pubkeys:
+ addr = self.wallet.pubkey_to_addr(pubkey)
+ remaining_used_addresses.discard(addr)
+ else:
+ addresses, saved_indices = await self.collect_addresses_init()
+ for addr in addresses:
+ remaining_used_addresses.discard(addr)
BATCH_SIZE = 100
MAX_ITERATIONS = 20
@@ -602,12 +637,20 @@ class WalletService(Service):
for j in range(MAX_ITERATIONS):
if not remaining_used_addresses:
break
- gap_addrs = self.collect_addresses_gap(gap_limit=BATCH_SIZE)
# note: gap addresses *not* imported here; we are still trying
# to find the highest-index used address, and assume that imports
# are up to that index (at least) - see above main rationale.
- for addr in gap_addrs:
- remaining_used_addresses.discard(addr)
+ if isinstance(self.wallet, (TaprootWallet, FrostWallet)):
+ gap_pubkeys = await self.collect_pubkeys_gap(
+ gap_limit=BATCH_SIZE)
+ for pubkey in gap_pubkeys:
+ addr = self.wallet.pubkey_to_addr(pubkey)
+ remaining_used_addresses.discard(addr)
+ else:
+ gap_addrs = await self.collect_addresses_gap(
+ gap_limit=BATCH_SIZE)
+ for addr in gap_addrs:
+ remaining_used_addresses.discard(addr)
# increase wallet indices for next iteration
for md in current_indices:
@@ -627,8 +670,8 @@ class WalletService(Service):
# we ensure that all addresses that will be displayed (see wallet_utils.py,
# function wallet_display()) are imported by importing gap limit beyond current
# index:
- if isinstance(self.wallet, TaprootWallet):
- pubkeys = self.collect_pubkeys_gap()
+ if isinstance(self.wallet, (TaprootWallet, FrostWallet)):
+ pubkeys = await self.collect_pubkeys_gap()
desc_scripts = [f'tr({bytes(P)[1:].hex()})' for P in pubkeys]
descriptors = set()
for desc in desc_scripts:
@@ -639,7 +682,7 @@ class WalletService(Service):
self.restart_callback)
else:
self.bci.import_addresses(
- self.collect_addresses_gap(),
+ await self.collect_addresses_gap(),
self.get_wallet_name(),
self.restart_callback)
@@ -663,7 +706,7 @@ class WalletService(Service):
# there's also a sys.exit() in BitcoinCoreInterface.import_addresses()
#perhaps have sys.exit() placed inside the restart_cb that only
# CLI scripts will use
- if self.bci.__class__ == BitcoinCoreInterface:
+ if isinstance(self.bci, BitcoinCoreInterface):
#Exit conditions cannot be included in tests
restart_msg = ("Use `bitcoin-cli rescanblockchain` if you're "
"recovering an existing wallet from backup seed\n"
@@ -672,7 +715,7 @@ class WalletService(Service):
restart_cb(restart_msg)
else:
jmprint(restart_msg, "important")
- sys.exit(EXIT_SUCCESS)
+ twisted_sys_exit(EXIT_SUCCESS)
def sync_burner_outputs(self, burner_txes):
mixdepth = FidelityBondMixin.FIDELITY_BOND_MIXDEPTH
@@ -760,7 +803,7 @@ class WalletService(Service):
return self.bci.get_block_height(self.bci.get_transaction(
txid)["blockhash"])
- def sync_addresses(self):
+ async def sync_addresses(self):
""" Triggered by use of --recoversync option in scripts,
attempts a full scan of the blockchain without assuming
anything about past usages of addresses (does not use
@@ -769,8 +812,8 @@ class WalletService(Service):
jlog.debug("requesting detailed wallet history")
wallet_name = self.get_wallet_name()
- if isinstance(self.wallet, TaprootWallet):
- pubkeys, saved_indices = self.collect_pubkeys_init()
+ if isinstance(self.wallet, (TaprootWallet, FrostWallet)):
+ pubkeys, saved_indices = await self.collect_pubkeys_init()
desc_scripts = [f'tr({bytes(P)[1:].hex()})' for P in pubkeys]
descriptors= set()
for desc in desc_scripts:
@@ -778,7 +821,7 @@ class WalletService(Service):
import_needed = self.bci.import_descriptors_if_needed(
descriptors, wallet_name)
else:
- addresses, saved_indices = self.collect_addresses_init()
+ addresses, saved_indices = await self.collect_addresses_init()
import_needed = self.bci.import_addresses_if_needed(
addresses, wallet_name)
if import_needed:
@@ -820,18 +863,36 @@ class WalletService(Service):
gap_limit_used = not self.check_gap_indices(used_indices)
self.rewind_wallet_indices(used_indices, saved_indices)
- new_addresses = self.collect_addresses_gap()
- if self.bci.import_addresses_if_needed(new_addresses, wallet_name):
- jlog.debug("Syncing iteration finished, additional step required (more address import required)")
- self.synced = False
- self.display_rescan_message_and_system_exit(self.restart_callback)
- elif gap_limit_used:
- jlog.debug("Syncing iteration finished, additional step required (gap limit used)")
- self.synced = False
+ if isinstance(self.wallet, (TaprootWallet, FrostWallet)):
+ new_pubkeys = await self.collect_pubkeys_gap()
+ desc_scripts = [f'tr({bytes(P)[1:].hex()})' for P in new_pubkeys]
+ descriptors = set()
+ for desc in desc_scripts:
+ descriptors.add(f'{descsum_create(desc)}')
+ if self.bci.import_descriptors_if_needed(descriptors, wallet_name):
+ jlog.debug("Syncing iteration finished, additional step required (more pubkey import required)")
+ self.synced = False
+ self.display_rescan_message_and_system_exit(self.restart_callback)
+ elif gap_limit_used:
+ jlog.debug("Syncing iteration finished, additional step required (gap limit used)")
+ self.synced = False
+ else:
+ jlog.debug("Wallet successfully synced")
+ self.rewind_wallet_indices(used_indices, saved_indices)
+ self.synced = True
else:
- jlog.debug("Wallet successfully synced")
- self.rewind_wallet_indices(used_indices, saved_indices)
- self.synced = True
+ new_addresses = await self.collect_addresses_gap()
+ if self.bci.import_addresses_if_needed(new_addresses, wallet_name):
+ jlog.debug("Syncing iteration finished, additional step required (more address import required)")
+ self.synced = False
+ self.display_rescan_message_and_system_exit(self.restart_callback)
+ elif gap_limit_used:
+ jlog.debug("Syncing iteration finished, additional step required (gap limit used)")
+ self.synced = False
+ else:
+ jlog.debug("Wallet successfully synced")
+ self.rewind_wallet_indices(used_indices, saved_indices)
+ self.synced = True
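The `tr(...)` descriptor strings built in both taproot branches above take the x-only key, i.e. the compressed pubkey minus its leading parity byte; a sketch (checksumming via the project's `descsum_create` is omitted):

```python
def taproot_descriptor(pubkey):
    # drop the leading 0x02/0x03 parity byte to get the 32-byte
    # x-only key expected inside tr()
    assert len(pubkey) == 33 and pubkey[0] in (2, 3)
    return f'tr({pubkey[1:].hex()})'
```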
def sync_unspent(self):
st = time.time()
@@ -855,7 +916,7 @@ class WalletService(Service):
if self.isRunning:
self.stopService()
stop_reactor()
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
wallet_name = self.get_wallet_name()
self.reset_utxos()
@@ -923,11 +984,11 @@ class WalletService(Service):
def save_wallet(self):
self.wallet.save()
- def get_utxos_by_mixdepth(self, include_disabled: bool = False,
- verbose: bool = False,
- includeconfs: bool = False,
- limit_mixdepth: Optional[int] = None
- ) -> collections.defaultdict:
+ async def get_utxos_by_mixdepth(self, include_disabled: bool = False,
+ verbose: bool = False,
+ includeconfs: bool = False,
+ limit_mixdepth: Optional[int] = None
+ ) -> collections.defaultdict:
""" Returns utxos by mixdepth in a dict, optionally including
information about how many confirmations each utxo has.
"""
@@ -944,7 +1005,7 @@ class WalletService(Service):
confs = self.current_blockheight - h + 1
ubym_conv[m][u]["confs"] = confs
return ubym_conv
- ubym = self.wallet.get_utxos_by_mixdepth(
+ ubym = await self.wallet.get_utxos_by_mixdepth(
include_disabled=include_disabled, includeheight=includeconfs,
limit_mixdepth=limit_mixdepth)
if not includeconfs:
@@ -958,15 +1019,16 @@ class WalletService(Service):
else:
return self.current_blockheight - minconfs + 1
- def select_utxos(self, mixdepth, amount, utxo_filter=None, select_fn=None,
- minconfs=None, includeaddr=False, require_auth_address=False):
+ async def select_utxos(self, mixdepth, amount, utxo_filter=None,
+ select_fn=None, minconfs=None, includeaddr=False,
+ require_auth_address=False):
""" Request utxos from the wallet in a particular mixdepth to satisfy
a certain total amount, optionally set the selector function (or use
the currently configured function set by the wallet, and optionally
require a minimum of minconfs confirmations (default none means
unconfirmed are allowed).
"""
- return self.wallet.select_utxos(
+ return await self.wallet.select_utxos(
mixdepth, amount, utxo_filter=utxo_filter, select_fn=select_fn,
maxheight=self.minconfs_to_maxheight(minconfs),
includeaddr=includeaddr, require_auth_address=require_auth_address)
@@ -993,17 +1055,17 @@ class WalletService(Service):
self.bci.import_descriptors(
[descriptor], self.wallet.get_wallet_name())
- def get_internal_addr(self, mixdepth):
- if isinstance(self.wallet, TaprootWallet):
- pubkey = self.wallet.get_internal_pubkey(mixdepth)
+ async def get_internal_addr(self, mixdepth):
+ if isinstance(self.wallet, (TaprootWallet, FrostWallet)):
+ pubkey = await self.wallet.get_internal_pubkey(mixdepth)
self.import_pubkey(pubkey)
return self.wallet.pubkey_to_addr(pubkey)
else:
- addr = self.wallet.get_internal_addr(mixdepth)
+ addr = await self.wallet.get_internal_addr(mixdepth)
self.import_addr(addr)
return addr
- def collect_addresses_init(self) -> Tuple[Set[str], Dict[int, List[int]]]:
+ async def collect_addresses_init(self) -> Tuple[Set[str], Dict[int, List[int]]]:
""" Collects the "current" set of addresses,
as defined by the indices recorded in the wallet's
index cache (persisted in the wallet file usually).
@@ -1019,25 +1081,30 @@ class WalletService(Service):
BaseWallet.ADDRESS_TYPE_INTERNAL):
next_unused = self.get_next_unused_index(md, address_type)
for index in range(next_unused):
- addresses.add(self.get_addr(md, address_type, index))
+ addresses.add(
+ await self.get_addr(md, address_type, index))
for index in range(self.gap_limit):
- addresses.add(self.get_new_addr(md, address_type,
- validate_cache=False))
+ addresses.add(
+ await self.get_new_addr(
+ md, address_type, validate_cache=False))
# reset the indices to the value we had before the
# new address calls:
self.set_next_index(md, address_type, next_unused)
saved_indices[md][address_type] = next_unused
# include any imported addresses
for path in self.yield_imported_paths(md):
- addresses.add(self.get_address_from_path(path))
+ addresses.add(await self.get_address_from_path(path))
if isinstance(self.wallet, FidelityBondMixin):
md = FidelityBondMixin.FIDELITY_BOND_MIXDEPTH
address_type = FidelityBondMixin.BIP32_TIMELOCK_ID
for timenumber in range(FidelityBondMixin.TIMENUMBER_COUNT):
- addresses.add(self.get_addr(md, address_type, timenumber))
+ addresses.add(
+ await self.get_addr(md, address_type, timenumber))
return addresses, saved_indices
- def collect_pubkeys_init(self) -> Tuple[Set[str], Dict[int, List[int]]]:
+ async def collect_pubkeys_init(
+ self
+ ) -> Tuple[Set[str], Dict[int, List[int]]]:
""" Collects the "current" set of pubkeys,
as defined by the indices recorded in the wallet's
index cache (persisted in the wallet file usually).
@@ -1053,17 +1120,19 @@ class WalletService(Service):
BaseWallet.ADDRESS_TYPE_INTERNAL):
next_unused = self.get_next_unused_index(md, address_type)
for index in range(next_unused):
- pubkeys.add(self.get_pubkey(md, address_type, index))
+ pubkey = await self.get_pubkey(md, address_type, index)
+ pubkeys.add(pubkey)
for index in range(self.gap_limit):
- pubkeys.add(self.get_new_pubkey(
- md, address_type, validate_cache=False))
+ pubkey = await self.get_new_pubkey(
+ md, address_type, validate_cache=False)
+ pubkeys.add(pubkey)
# reset the indices to the value we had before the
# new address calls:
self.set_next_index(md, address_type, next_unused)
saved_indices[md][address_type] = next_unused
return pubkeys, saved_indices
- def collect_addresses_gap(self, gap_limit=None):
+ async def collect_addresses_gap(self, gap_limit=None):
gap_limit = gap_limit or self.gap_limit
addresses = set()
for md in range(self.max_mixdepth + 1):
@@ -1071,12 +1140,13 @@ class WalletService(Service):
BaseWallet.ADDRESS_TYPE_EXTERNAL):
old_next = self.get_next_unused_index(md, address_type)
for index in range(gap_limit):
- addresses.add(self.get_new_addr(md, address_type,
- validate_cache=False))
+ addresses.add(
+ await self.get_new_addr(
+ md, address_type, validate_cache=False))
self.set_next_index(md, address_type, old_next)
return addresses
- def collect_pubkeys_gap(self, gap_limit=None):
+ async def collect_pubkeys_gap(self, gap_limit=None):
gap_limit = gap_limit or self.gap_limit
pubkeys = set()
for md in range(self.max_mixdepth + 1):
@@ -1084,18 +1154,19 @@ class WalletService(Service):
BaseWallet.ADDRESS_TYPE_EXTERNAL):
old_next = self.get_next_unused_index(md, address_type)
for index in range(gap_limit):
- pubkeys.add(self.get_new_pubkey(
- md, address_type, validate_cache=False))
+ pubkey = await self.get_new_pubkey(
+ md, address_type, validate_cache=False)
+ pubkeys.add(pubkey)
self.set_next_index(md, address_type, old_next)
return pubkeys
- def get_external_addr(self, mixdepth):
- if isinstance(self.wallet, TaprootWallet):
- pubkey = self.wallet.get_external_pubkey(mixdepth)
+ async def get_external_addr(self, mixdepth):
+ if isinstance(self.wallet, (TaprootWallet, FrostWallet)):
+ pubkey = await self.wallet.get_external_pubkey(mixdepth)
self.import_pubkey(pubkey)
return self.wallet.pubkey_to_addr(pubkey)
else:
- addr = self.wallet.get_external_addr(mixdepth)
+ addr = await self.wallet.get_external_addr(mixdepth)
self.import_addr(addr)
return addr
diff --git a/src/jmclient/wallet_utils.py b/src/jmclient/wallet_utils.py
index ed8f421..0b1a889 100644
--- a/src/jmclient/wallet_utils.py
+++ b/src/jmclient/wallet_utils.py
@@ -1,3 +1,4 @@
+import asyncio
import base64
import binascii
import json
@@ -10,22 +11,30 @@ from numbers import Integral
from collections import Counter, defaultdict
from itertools import islice, chain
from typing import Callable, Optional, Tuple, Union
+
+from twisted.internet import reactor, task
+
from jmclient import (get_network, WALLET_IMPLEMENTATIONS, Storage, podle,
- jm_single, WalletError, BaseWallet, VolatileStorage,
+ jm_single, WalletError, BaseWallet, VolatileStorage, DKGRecoveryStorage,
StoragePasswordError, is_taproot_mode, is_segwit_mode, SegwitLegacyWallet,
LegacyWallet, SegwitWallet, FidelityBondMixin, FidelityBondWatchonlyWallet,
- TaprootWallet, is_native_segwit_mode, load_program_config,
- add_base_options, check_regtest, JMClientProtocolFactory, start_reactor)
+ TaprootWallet, is_native_segwit_mode, load_program_config, is_frost_mode,
+ add_base_options, check_regtest, JMClientProtocolFactory, start_reactor,
+ FrostWallet, DKGStorage)
from jmclient.blockchaininterface import (BitcoinCoreInterface,
BitcoinCoreNoHistoryInterface)
from jmclient.wallet_service import WalletService
+from jmbase import stop_reactor
from jmbase.support import (get_password, jmprint, EXIT_FAILURE,
EXIT_ARGERROR, utxo_to_utxostr, hextobin, bintohex,
IndentedHelpFormatterWithNL, dict_factory,
- cli_prompt_user_yesno)
+ cli_prompt_user_yesno, twisted_sys_exit)
+from jmfrost.chilldkg_ref.chilldkg import hostpubkey_gen
+from .frost_clients import FROSTClient
+from .frost_ipc import FrostIPCServer, FrostIPCClient
from .cryptoengine import TYPE_P2PKH, TYPE_P2SH_P2WPKH, TYPE_P2WPKH, \
- TYPE_SEGWIT_WALLET_FIDELITY_BONDS, TYPE_P2TR
+ TYPE_SEGWIT_WALLET_FIDELITY_BONDS, TYPE_P2TR, TYPE_P2TR_FROST
from .output import fmt_utxo
import jmbitcoin as btc
from .descriptor import descsum_create
@@ -59,6 +68,14 @@ The method is one of the following:
-H and proof which is output of Bitcoin Core\'s RPC call gettxoutproof.
(createwatchonly) Create a watch-only fidelity bond wallet.
(setlabel) Set the label associated with the given address.
+(hostpubkey) Display the host public key.
+(servefrost) Run only as a DKG/FROST counterparty.
+(dkgrecover) Recover a wallet's DKG data from the DKG Recovery File.
+(dkgls) Display FrostWallet DKG data.
+(dkgrm) Remove FrostWallet DKG data by session_id list.
+(recdkgls) Display DKG Recovery File data.
+(recdkgrm) Remove DKG Recovery File data by session_id list.
+(testfrost) Run only as a test of FROST signing.
"""
parser = OptionParser(usage='usage: %prog [options] [wallet file] [method] [args..]',
description=description, formatter=IndentedHelpFormatterWithNL())
@@ -406,15 +423,16 @@ def get_tx_info(txid: bytes, tx_cache: Optional[dict] = None) -> Tuple[
rpctx.get('blocktime', 0), rpctx_deser
-def get_imported_privkey_branch(wallet_service, m, showprivkey):
+async def get_imported_privkey_branch(wallet_service, m, showprivkey):
entries = []
balance_by_script = defaultdict(int)
- for data in wallet_service.get_utxos_at_mixdepth(m,
- include_disabled=True).values():
+ _utxos = await wallet_service.get_utxos_at_mixdepth(
+ m, include_disabled=True)
+ for data in _utxos.values():
balance_by_script[data['script']] += data['value']
for path in wallet_service.yield_imported_paths(m):
- addr = wallet_service.get_address_from_path(path)
- script = wallet_service.get_script_from_path(path)
+ addr = await wallet_service.get_address_from_path(path)
+ script = await wallet_service.get_script_from_path(path)
balance = balance_by_script.get(script, 0)
status = ('used' if balance else 'empty')
if showprivkey:
@@ -429,14 +447,16 @@ def get_imported_privkey_branch(wallet_service, m, showprivkey):
return WalletViewBranch("m/0", m, -1, branchentries=entries)
return None
-def wallet_showutxos(wallet_service: WalletService, showprivkey: bool,
- limit_mixdepth: Optional[int] = None) -> str:
+async def wallet_showutxos(wallet_service: WalletService, showprivkey: bool,
+ limit_mixdepth: Optional[int] = None) -> str:
unsp = {}
max_tries = jm_single().config.getint("POLICY", "taker_utxo_retries")
- utxos = wallet_service.get_utxos_by_mixdepth(include_disabled=True,
- includeconfs=True, limit_mixdepth=limit_mixdepth)
- for md in utxos:
- (enabled, disabled) = get_utxos_enabled_disabled(wallet_service, md)
+ _utxos = await wallet_service.get_utxos_by_mixdepth(
+ include_disabled=True, includeconfs=True,
+ limit_mixdepth=limit_mixdepth)
+ for md in _utxos:
+ (enabled, disabled) = await get_utxos_enabled_disabled(wallet_service,
+ md)
-        for u, av in utxos[md].items():
+        for u, av in _utxos[md].items():
success, us = utxo_to_utxostr(u)
assert success
@@ -491,7 +511,7 @@ def get_utxo_status_string(utxos, utxos_enabled, path):
utxo_status_string += ' [PENDING]'
return utxo_status_string
-def wallet_display(wallet_service, showprivkey, displayall=False,
+async def wallet_display(wallet_service, showprivkey, displayall=False,
serialized=True, summarized=False, mixdepth=None, jsonified=False):
"""build the walletview object,
then return its serialization directly if serialized,
@@ -535,8 +555,9 @@ def wallet_display(wallet_service, showprivkey, displayall=False,
acctlist = []
- utxos = wallet_service.get_utxos_by_mixdepth(include_disabled=True, includeconfs=True)
- utxos_enabled = wallet_service.get_utxos_by_mixdepth()
+ utxos = await wallet_service.get_utxos_by_mixdepth(
+ include_disabled=True, includeconfs=True)
+ utxos_enabled = await wallet_service.get_utxos_by_mixdepth()
if mixdepth:
md_range = range(mixdepth, mixdepth + 1)
@@ -557,10 +578,11 @@ def wallet_display(wallet_service, showprivkey, displayall=False,
gap_addrs = []
for k in range(unused_index + wallet_service.gap_limit):
path = wallet_service.get_path(m, address_type, k)
- addr = wallet_service.get_address_from_path(path)
+ addr = await wallet_service.get_address_from_path(path)
if k >= unused_index:
- if isinstance(wallet_service.wallet, TaprootWallet):
- P = wallet_service.get_pubkey(m, address_type, k)
+ if isinstance(wallet_service.wallet,
+ (TaprootWallet, FrostWallet)):
+ P = await wallet_service.get_pubkey(m, address_type, k)
desc = f'tr({bytes(P)[1:].hex()})'
gap_addrs.append(f'{descsum_create(desc)}')
else:
@@ -584,7 +606,8 @@ def wallet_display(wallet_service, showprivkey, displayall=False,
# displayed for user deposit.
# It also does not apply to fidelity bond addresses which are created manually.
if address_type == BaseWallet.ADDRESS_TYPE_EXTERNAL and wallet_service.bci is not None:
- if isinstance(wallet_service.wallet, TaprootWallet):
+ if isinstance(wallet_service.wallet,
+ (TaprootWallet, FrostWallet)):
wallet_service.bci.import_descriptors(
gap_addrs, wallet_service.get_wallet_name())
else:
@@ -601,7 +624,7 @@ def wallet_display(wallet_service, showprivkey, displayall=False,
entrylist = []
for timenumber in range(FidelityBondMixin.TIMENUMBER_COUNT):
path = wallet_service.get_path(m, address_type, timenumber)
- addr = wallet_service.get_address_from_path(path)
+ addr = await wallet_service.get_address_from_path(path)
label = wallet_service.get_address_label(addr)
timelock = datetime.utcfromtimestamp(0) + timedelta(seconds=path[-1])
@@ -666,7 +689,7 @@ def wallet_display(wallet_service, showprivkey, displayall=False,
branchlist.append(WalletViewBranch(path, m, address_type, entrylist,
xpub=xpub_key))
- ipb = get_imported_privkey_branch(wallet_service, m, showprivkey)
+ ipb = await get_imported_privkey_branch(wallet_service, m, showprivkey)
if ipb:
branchlist.append(ipb)
#get the xpub key of the whole account
@@ -732,17 +755,18 @@ def cli_do_support_fidelity_bonds() -> bool:
jmprint("Not supporting fidelity bonds", "info")
return False
-def wallet_generate_recover_bip39(method: str,
- walletspath: str,
- default_wallet_name: str,
- display_seed_callback: Callable[[str, str], None],
- enter_seed_callback: Optional[Callable[[], Tuple[Optional[str], Optional[str]]]],
- enter_wallet_password_callback: Callable[[], str],
- enter_wallet_file_name_callback: Callable[[], str],
- enter_if_use_seed_extension: Optional[Callable[[], bool]],
- enter_seed_extension_callback: Optional[Callable[[], Optional[str]]],
- enter_do_support_fidelity_bonds: Callable[[], bool],
- mixdepth: int = DEFAULT_MIXDEPTH) -> bool:
+async def wallet_generate_recover_bip39(
+ method: str,
+ walletspath: str,
+ default_wallet_name: str,
+ display_seed_callback: Callable[[str, str], None],
+ enter_seed_callback: Optional[Callable[[], Tuple[Optional[str], Optional[str]]]],
+ enter_wallet_password_callback: Callable[[], str],
+ enter_wallet_file_name_callback: Callable[[], str],
+ enter_if_use_seed_extension: Optional[Callable[[], bool]],
+ enter_seed_extension_callback: Optional[Callable[[], Optional[str]]],
+ enter_do_support_fidelity_bonds: Callable[[], bool],
+ mixdepth: int = DEFAULT_MIXDEPTH) -> bool:
entropy = None
mnemonic_extension = None
if method == "generate":
@@ -777,32 +801,41 @@ def wallet_generate_recover_bip39(method: str,
if not wallet_name:
wallet_name = default_wallet_name
wallet_path = os.path.join(walletspath, wallet_name)
- if is_taproot_mode():
+ if is_taproot_mode() or is_frost_mode():
support_fidelity_bonds = False
else:
support_fidelity_bonds = enter_do_support_fidelity_bonds()
wallet_cls = get_wallet_cls(get_configured_wallet_type(support_fidelity_bonds))
- wallet = create_wallet(wallet_path, password, mixdepth, wallet_cls,
- entropy=entropy,
- entropy_extension=mnemonic_extension)
+ wallet = await create_wallet(
+ wallet_path, password, mixdepth, wallet_cls, entropy=entropy,
+ entropy_extension=mnemonic_extension)
mnemonic, mnext = wallet.get_mnemonic_words()
display_seed_callback and display_seed_callback(mnemonic, mnext or '')
wallet.close()
return True
-def wallet_generate_recover(method, walletspath,
- default_wallet_name='wallet.jmdat',
- mixdepth=DEFAULT_MIXDEPTH):
- if is_taproot_mode():
- return wallet_generate_recover_bip39(method, walletspath,
+async def wallet_generate_recover(method, walletspath,
+ default_wallet_name='wallet.jmdat',
+ mixdepth=DEFAULT_MIXDEPTH):
+ if is_frost_mode():
+ return await wallet_generate_recover_bip39(
+ method, walletspath,
+ default_wallet_name, cli_display_user_words, cli_user_mnemonic_entry,
+ cli_get_wallet_passphrase_check, cli_get_wallet_file_name,
+ cli_do_use_mnemonic_extension, cli_get_mnemonic_extension,
+ cli_do_support_fidelity_bonds, mixdepth=mixdepth)
+ elif is_taproot_mode():
+ return await wallet_generate_recover_bip39(
+ method, walletspath,
default_wallet_name, cli_display_user_words, cli_user_mnemonic_entry,
cli_get_wallet_passphrase_check, cli_get_wallet_file_name,
cli_do_use_mnemonic_extension, cli_get_mnemonic_extension,
cli_do_support_fidelity_bonds, mixdepth=mixdepth)
elif is_segwit_mode():
#Here using default callbacks for scripts (not used in Qt)
- return wallet_generate_recover_bip39(method, walletspath,
+ return await wallet_generate_recover_bip39(
+ method, walletspath,
default_wallet_name, cli_display_user_words, cli_user_mnemonic_entry,
cli_get_wallet_passphrase_check, cli_get_wallet_file_name,
cli_do_use_mnemonic_extension, cli_get_mnemonic_extension,
@@ -829,8 +862,8 @@ def wallet_generate_recover(method, walletspath,
wallet_name = default_wallet_name
wallet_path = os.path.join(walletspath, wallet_name)
- wallet = create_wallet(wallet_path, password, mixdepth,
- wallet_cls=LegacyWallet, entropy=entropy)
+ wallet = await create_wallet(wallet_path, password, mixdepth,
+ wallet_cls=LegacyWallet, entropy=entropy)
jmprint("Write down and safely store this wallet recovery seed\n\n{}\n"
.format(wallet.get_mnemonic_words()[0]), "important")
wallet.close()
@@ -845,7 +878,7 @@ def wallet_change_passphrase(walletservice,
return True
-def wallet_fetch_history(wallet, options):
+async def wallet_fetch_history(wallet, options):
# sort txes in a db because python can be really bad with large lists
con = sqlite3.connect(":memory:")
con.row_factory = dict_factory
@@ -876,8 +909,8 @@ def wallet_fetch_history(wallet, options):
'FROM transactions '
'WHERE (blockhash IS NOT NULL AND blocktime IS NOT NULL) OR conflicts = 0 '
'ORDER BY blocktime').fetchall()
- wallet_script_set = set(wallet.get_script_from_path(p)
- for p in wallet.yield_known_paths())
+    wallet_script_set = {await wallet.get_script_from_path(p)
+                         for p in wallet.yield_known_paths()}
def s():
return ',' if options.csv else ' '
@@ -1129,8 +1162,8 @@ def wallet_fetch_history(wallet, options):
jmprint(('BUG ERROR: wallet balance (%s) does not match balance from ' +
'history (%s)') % (btc.sat_to_str(total_wallet_balance),
btc.sat_to_str(balance)))
- wallet_utxo_count = sum(map(len, wallet.get_utxos_by_mixdepth(
- include_disabled=True).values()))
+ _utxos = await wallet.get_utxos_by_mixdepth(include_disabled=True)
+ wallet_utxo_count = sum(map(len, _utxos.values()))
if utxo_count + unconfirmed_utxo_count != wallet_utxo_count:
jmprint(('BUG ERROR: wallet utxo count (%d) does not match utxo count from ' +
'history (%s)') % (wallet_utxo_count, utxo_count))
@@ -1150,7 +1183,7 @@ def wallet_showseed(wallet):
return text
-def wallet_importprivkey(wallet, mixdepth):
+async def wallet_importprivkey(wallet, mixdepth):
jmprint("WARNING: This imported key will not be recoverable with your 12 "
"word mnemonic phrase. Make sure you have backups.", "warning")
jmprint("WARNING: Make sure that the type of the public address previously "
@@ -1173,7 +1206,7 @@ def wallet_importprivkey(wallet, mixdepth):
print("Failed to import key {}: {}".format(wif, e))
import_failed += 1
else:
- imported_addr.append(wallet.get_address_from_path(path))
+ imported_addr.append(await wallet.get_address_from_path(path))
if not imported_addr:
jmprint("Warning: No keys imported!", "error")
@@ -1197,8 +1230,9 @@ def wallet_dumpprivkey(wallet, hdpath):
return wallet.get_wif_path(path) # will raise exception on invalid path
-def wallet_signmessage(wallet, hdpath: str, message: str,
- out_str: bool = True) -> Union[Tuple[str, str, str], str]:
+async def wallet_signmessage(
+ wallet, hdpath: str, message: str,
+ out_str: bool = True) -> Union[Tuple[str, str, str], str]:
""" Given a wallet, a BIP32 HD path (as can be output
from the display method) and a message string, returns
a base64 encoded signature along with the corresponding
@@ -1217,18 +1251,18 @@ def wallet_signmessage(wallet, hdpath: str, message: str,
return "Error: no message specified"
path = wallet.path_repr_to_path(hdpath)
- addr, sig = wallet.sign_message(msg, path)
+ addr, sig = await wallet.sign_message(msg, path)
if not out_str:
return (sig, message, addr)
return ("Signature: {}\nMessage: {}\nAddress: {}\n"
"To verify this in Electrum use Tools->Sign/verify "
"message.".format(sig, message, addr))
-def wallet_signpsbt(wallet_service, psbt):
+async def wallet_signpsbt(wallet_service, psbt):
if not psbt:
return "Error: no PSBT specified"
- signed_psbt_and_signresult, err = wallet_service.sign_psbt(
+ signed_psbt_and_signresult, err = await wallet_service.sign_psbt(
base64.b64decode(psbt.encode('ascii')), with_sign_result=True)
if err:
return "Failed to sign PSBT, quitting. Error message: {}".format(err)
@@ -1254,8 +1288,9 @@ def wallet_signpsbt(wallet_service, psbt):
"inputs.".format(signresult.num_inputs_signed))
return ""
-def display_utxos_for_disable_choice_default(wallet_service, utxos_enabled,
- utxos_disabled):
+async def display_utxos_for_disable_choice_default(wallet_service,
+ utxos_enabled,
+ utxos_disabled):
""" CLI implementation of the callback required as described in
wallet_disableutxo
"""
@@ -1276,20 +1311,21 @@ def display_utxos_for_disable_choice_default(wallet_service, utxos_enabled,
break
return ret
- def output_utxos(utxos, status, start=0):
+ async def output_utxos(utxos, status, start=0):
for (txid, idx), v in utxos.items():
value = v['value']
jmprint("{:4}: {} ({}): {} -- {}".format(
start, fmt_utxo((txid, idx)),
- wallet_service.wallet.script_to_addr(v["script"]),
+ await wallet_service.wallet.script_to_addr(v["script"]),
btc.amount_to_str(value), status))
start += 1
yield txid, idx
jmprint("List of UTXOs:")
- ulist = list(output_utxos(utxos_disabled, 'FROZEN'))
+    ulist = [u async for u in output_utxos(utxos_disabled, 'FROZEN')]
disabled_max = len(ulist) - 1
- ulist.extend(output_utxos(utxos_enabled, 'NOT FROZEN', start=len(ulist)))
+    ulist.extend([u async for u in output_utxos(
+        utxos_enabled, 'NOT FROZEN', start=len(ulist))])
max_id = len(ulist) - 1
chosen_idx = default_user_choice(max_id)
if chosen_idx == -1:
@@ -1301,19 +1337,21 @@ def display_utxos_for_disable_choice_default(wallet_service, utxos_enabled,
disable = False if chosen_idx <= disabled_max else True
return ulist[chosen_idx], disable
-def get_utxos_enabled_disabled(wallet_service: WalletService,
+async def get_utxos_enabled_disabled(wallet_service: WalletService,
md: int) -> Tuple[dict, dict]:
""" Returns dicts for enabled and disabled separately
"""
- utxos_enabled = wallet_service.get_utxos_at_mixdepth(md)
- utxos_all = wallet_service.get_utxos_at_mixdepth(md, include_disabled=True)
+ utxos_enabled = await wallet_service.get_utxos_at_mixdepth(md)
+ utxos_all = await wallet_service.get_utxos_at_mixdepth(
+ md, include_disabled=True)
utxos_disabled_keyset = set(utxos_all).difference(set(utxos_enabled))
utxos_disabled = {}
for u in utxos_disabled_keyset:
utxos_disabled[u] = utxos_all[u]
return utxos_enabled, utxos_disabled
-def wallet_freezeutxo(wallet_service, md, display_callback=None, info_callback=None):
+async def wallet_freezeutxo(wallet_service, md,
+ display_callback=None, info_callback=None):
""" Given a wallet and a mixdepth, display to the user
the set of available utxos, indexed by integer, and accept a choice
of index to "freeze", then commit this disabling to the wallet storage,
@@ -1344,14 +1382,18 @@ def wallet_freezeutxo(wallet_service, md, display_callback=None, info_callback=N
info_callback("Specify the mixdepth with the -m flag", "error")
return "Failed"
while True:
- utxos_enabled, utxos_disabled = get_utxos_enabled_disabled(
+ utxos_enabled, utxos_disabled = await get_utxos_enabled_disabled(
wallet_service, md)
if utxos_disabled == {} and utxos_enabled == {}:
info_callback("The mixdepth: " + str(md) + \
" contains no utxos to freeze/unfreeze.", "error")
return "Failed"
- display_ret = display_callback(wallet_service,
- utxos_enabled, utxos_disabled)
+        if asyncio.iscoroutinefunction(display_callback):
+ display_ret = await display_callback(wallet_service,
+ utxos_enabled, utxos_disabled)
+ else:
+ display_ret = display_callback(wallet_service,
+ utxos_enabled, utxos_disabled)
if display_ret is None:
break
if display_ret == "all":
@@ -1373,7 +1415,7 @@ def wallet_freezeutxo(wallet_service, md, display_callback=None, info_callback=N
-def wallet_gettimelockaddress(wallet, locktime_string):
+async def wallet_gettimelockaddress(wallet, locktime_string):
if not isinstance(wallet, FidelityBondMixin):
jmprint("Error: not a fidelity bond wallet", "error")
return ""
@@ -1400,7 +1442,7 @@ def wallet_gettimelockaddress(wallet, locktime_string):
+ " not linked to your identity. Also, use a sweep transaction when funding the"
+ " timelocked address, i.e. Don't create a change address. See the privacy warnings in"
+ " fidelity-bonds.md")
- addr = wallet.get_address_from_path(path)
+ addr = await wallet.get_address_from_path(path)
return addr
def wallet_addtxoutproof(wallet_service, hdpath, txoutproof):
@@ -1424,7 +1466,7 @@ def wallet_addtxoutproof(wallet_service, hdpath, txoutproof):
new_merkle_branch, block_index)
return "Done"
-def wallet_createwatchonly(wallet_root_path, master_pub_key):
+async def wallet_createwatchonly(wallet_root_path, master_pub_key):
wallet_name = cli_get_wallet_file_name(defaultname="watchonly.jmdat")
if not wallet_name:
@@ -1443,13 +1485,15 @@ def wallet_createwatchonly(wallet_root_path, master_pub_key):
return ""
entropy = entropy.encode()
- wallet = create_wallet(wallet_path, password,
+ wallet = await create_wallet(wallet_path, password,
max_mixdepth=FidelityBondMixin.FIDELITY_BOND_MIXDEPTH,
wallet_cls=FidelityBondWatchonlyWallet, entropy=entropy)
return "Done"
def get_configured_wallet_type(support_fidelity_bonds):
configured_type = TYPE_P2PKH
+ if is_frost_mode():
+ return TYPE_P2TR_FROST
if is_taproot_mode():
return TYPE_P2TR
elif is_segwit_mode():
@@ -1474,17 +1518,36 @@ def get_wallet_cls(wtype):
"".format(wtype))
return cls
-def create_wallet(path, password, max_mixdepth, wallet_cls, **kwargs):
+async def create_wallet(path, password, max_mixdepth, wallet_cls, **kwargs):
storage = Storage(path, password, create=True)
- wallet_cls.initialize(storage, get_network(), max_mixdepth=max_mixdepth,
- **kwargs)
- storage.save()
- return wallet_cls(storage,
- gap_limit=jm_single().config.getint("POLICY", "gaplimit"))
+ gap_limit = jm_single().config.getint("POLICY", "gaplimit")
+ if wallet_cls == FrostWallet:
+ dkg_path = DKGStorage.dkg_path(path)
+ dkg_storage = DKGStorage(dkg_path, create=True)
+ dkg_recovery_path = DKGRecoveryStorage.dkg_recovery_path(path)
+ recovery_storage = DKGRecoveryStorage(dkg_recovery_path, create=True)
+ wallet_cls.initialize(storage, dkg_storage, recovery_storage,
+ get_network(), max_mixdepth=max_mixdepth,
+ **kwargs)
+ storage.save()
+ dkg_storage.save()
+ recovery_storage.save()
+ wallet = wallet_cls(storage, dkg_storage, recovery_storage,
+ gap_limit=gap_limit)
+ await wallet.async_init(storage, gap_limit=gap_limit)
+ return wallet
+ else:
+ wallet_cls.initialize(storage, get_network(),
+ max_mixdepth=max_mixdepth, **kwargs)
+ storage.save()
+ wallet = wallet_cls(storage, gap_limit=gap_limit)
+ await wallet.async_init(storage, gap_limit=gap_limit)
+ return wallet
-def open_test_wallet_maybe(path, seed, max_mixdepth,
- test_wallet_cls=SegwitWallet, wallet_password_stdin=False, **kwargs):
+async def open_test_wallet_maybe(
+ path, seed, max_mixdepth, test_wallet_cls=SegwitWallet,
+ wallet_password_stdin=False, **kwargs):
"""
Create a volatile test wallet if path is a hex-encoded string of length 64,
otherwise run open_wallet().
@@ -1528,13 +1591,16 @@ def open_test_wallet_maybe(path, seed, max_mixdepth,
if wallet_password_stdin is True:
password = read_password_stdin()
- return open_wallet(path, ask_for_password=False, password=password, mixdepth=max_mixdepth, **kwargs)
+ return await open_wallet(
+ path, ask_for_password=False, password=password,
+ mixdepth=max_mixdepth, **kwargs)
- return open_wallet(path, mixdepth=max_mixdepth, **kwargs)
+ return await open_wallet(path, mixdepth=max_mixdepth, **kwargs)
-def open_wallet(path, ask_for_password=True, password=None, read_only=False,
- **kwargs):
+async def open_wallet(path, ask_for_password=True, password=None,
+ read_only=False, load_dkg=False, dkg_read_only=True,
+ **kwargs):
"""
Open the wallet file at path and return the corresponding wallet object.
@@ -1549,6 +1615,9 @@ def open_wallet(path, ask_for_password=True, password=None, read_only=False,
returns:
wallet object
"""
+ if not read_only and not dkg_read_only:
+        raise Exception('open_wallet: read_only and dkg_read_only'
+                        ' cannot both be False')
if not os.path.isfile(path):
raise Exception("Failed to open wallet at '{}': not a file".format(path))
@@ -1580,7 +1649,51 @@ def open_wallet(path, ask_for_password=True, password=None, read_only=False,
if jm_single().config.get("POLICY", "wallet_caching_disabled") == "true":
load_cache = False
wallet_cls = get_wallet_cls_from_storage(storage)
- wallet = wallet_cls(storage, load_cache=load_cache, **kwargs)
+
+ if wallet_cls == FrostWallet:
+ dkg_storage = None
+ recovery_storage = None
+ if load_dkg:
+ dkg_path = DKGStorage.dkg_path(path)
+ if not os.path.isfile(dkg_path):
+ raise Exception(f"Failed to open DKG File at "
+ f"'{dkg_path}': not a file")
+ if not DKGStorage.is_storage_file(dkg_path):
+ raise Exception(f"Failed to open DKG File at "
+                                f"'{dkg_path}': invalid file magic.")
+ try:
+ if not dkg_read_only:
+ DKGStorage.verify_lock(dkg_path)
+ dkg_storage = DKGStorage(dkg_path, read_only=dkg_read_only)
+ except Exception as e:
+ jmprint(f"Failed to load DKG File, "
+ f"error message: {repr(e)}",
+ "error")
+ raise e
+ dkg_recovery_path = DKGRecoveryStorage.dkg_recovery_path(path)
+ if not os.path.isfile(dkg_recovery_path):
+ raise Exception(f"Failed to open DKG Recovery File at "
+ f"'{dkg_recovery_path}': not a file")
+ if not DKGRecoveryStorage.is_storage_file(dkg_recovery_path):
+ raise Exception(f"Failed to open DKG Recovery File at "
+                            f"'{dkg_recovery_path}': invalid "
+                            f"file magic.")
+ try:
+ if not dkg_read_only:
+ DKGRecoveryStorage.verify_lock(dkg_recovery_path)
+ recovery_storage = DKGRecoveryStorage(
+ dkg_recovery_path, read_only=dkg_read_only)
+ except Exception as e:
+ jmprint(f"Failed to load DKG Recovery File, "
+ f"error message: {repr(e)}",
+ "error")
+ raise e
+ wallet = wallet_cls(storage, dkg_storage, recovery_storage,
+ load_cache=load_cache, **kwargs)
+ await wallet.async_init(storage, load_cache=load_cache, **kwargs)
+ else:
+ wallet = wallet_cls(storage, load_cache=load_cache, **kwargs)
+ await wallet.async_init(storage, load_cache=load_cache, **kwargs)
wallet_sanity_check(wallet)
return wallet
@@ -1610,7 +1723,7 @@ def read_password_stdin():
return sys.stdin.readline().replace('\n','').encode('utf-8')
-def wallet_tool_main(wallet_root_path):
+async def wallet_tool_main(wallet_root_path):
"""Main wallet tool script function; returned is a string (output or error)
"""
parser = get_wallettool_parser()
@@ -1629,14 +1742,23 @@ def wallet_tool_main(wallet_root_path):
readonly_methods = ['display', 'displayall', 'summary', 'showseed',
'history', 'showutxos', 'dumpprivkey', 'signmessage',
'gettimelockaddress']
+ # FrostWallet related methods
+ frost_load_dkg_methods = ['hostpubkey', 'servefrost', 'dkgrecover',
+ 'dkgls', 'dkgrm', 'recdkgls', 'recdkgrm']
+ frost_noscan_methods = ['hostpubkey', 'servefrost', 'dkgrecover',
+ 'dkgls', 'dkgrm', 'recdkgls', 'recdkgrm',
+ 'testfrost']
+ frost_readonly_methods = ['hostpubkey', 'dkgls', 'recdkgls', 'testfrost']
+ noscan_methods.extend(frost_noscan_methods)
+ readonly_methods.extend(frost_readonly_methods)
if len(args) < 1:
parser.error('Needs a wallet file or method')
- sys.exit(EXIT_ARGERROR)
+ twisted_sys_exit(EXIT_ARGERROR)
if options.mixdepth is not None and options.mixdepth < 0:
parser.error("Must have at least one mixdepth.")
- sys.exit(EXIT_ARGERROR)
+ twisted_sys_exit(EXIT_ARGERROR)
if args[0] in noseed_methods:
method = args[0]
@@ -1651,8 +1773,14 @@ def wallet_tool_main(wallet_root_path):
if method in noseed_methods:
parser.error("The method '" + method + \
"' is not compatible with a wallet filename.")
- sys.exit(EXIT_ARGERROR)
-
+ twisted_sys_exit(EXIT_ARGERROR)
+
+ config = jm_single().config
+ if config.has_option('POLICY', 'frost'):
+ if config.getboolean("POLICY", "frost"):
+ readonly_methods.remove('display')
+ readonly_methods.remove('displayall')
read_only = method in readonly_methods
#special case needed for fidelity bond burner outputs
@@ -1660,15 +1788,29 @@ def wallet_tool_main(wallet_root_path):
if options.recoversync:
read_only = False
- wallet = open_test_wallet_maybe(
+ if method in frost_load_dkg_methods:
+ load_dkg = True
+ read_only = True
+ dkg_read_only = method in frost_readonly_methods
+ else:
+ load_dkg = False
+ dkg_read_only = True
+ wallet = await open_test_wallet_maybe(
wallet_path, seed, options.mixdepth, read_only=read_only,
+ load_dkg=load_dkg, dkg_read_only=dkg_read_only,
wallet_password_stdin=options.wallet_password_stdin, gap_limit=options.gaplimit)
# this object is only to respect the layering,
# the service will not be started since this is a synchronous script:
wallet_service = WalletService(wallet)
if wallet_service.rpc_error:
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
+
+ if (isinstance(wallet, FrostWallet) and
+ method not in frost_load_dkg_methods):
+ ipc_client = FrostIPCClient(wallet)
+ await ipc_client.async_init()
+ wallet.set_ipc_client(ipc_client)
if method not in noscan_methods and jm_single().bc_interface is not None:
# if nothing was configured, we override bitcoind's options so that
@@ -1676,41 +1818,42 @@ def wallet_tool_main(wallet_root_path):
if 'listunspent_args' not in jm_single().config.options('POLICY'):
jm_single().config.set('POLICY','listunspent_args', '[0]')
while True:
- if wallet_service.sync_wallet(fast = not options.recoversync):
+ if await wallet_service.sync_wallet(
+ fast=not options.recoversync):
break
#Now the wallet/data is prepared, execute the script according to the method
if method == "display":
- return wallet_display(wallet_service, options.showprivkey,
+ return await wallet_display(wallet_service, options.showprivkey,
mixdepth=options.mixdepth)
elif method == "displayall":
- return wallet_display(wallet_service, options.showprivkey,
+ return await wallet_display(wallet_service, options.showprivkey,
displayall=True, mixdepth=options.mixdepth)
elif method == "summary":
- return wallet_display(wallet_service, options.showprivkey,
+ return await wallet_display(wallet_service, options.showprivkey,
summarized=True, mixdepth=options.mixdepth)
elif method == "history":
if not isinstance(jm_single().bc_interface, BitcoinCoreInterface):
jmprint('showing history only available when using the Bitcoin Core ' +
'blockchain interface', "error")
- sys.exit(EXIT_ARGERROR)
+ twisted_sys_exit(EXIT_ARGERROR)
else:
- return wallet_fetch_history(wallet_service, options)
+ return await wallet_fetch_history(wallet_service, options)
elif method == "generate":
- retval = wallet_generate_recover("generate", wallet_root_path,
- mixdepth=options.mixdepth)
+ retval = await wallet_generate_recover(
+ "generate", wallet_root_path, mixdepth=options.mixdepth)
return "Generated wallet OK" if retval else "Failed"
elif method == "recover":
- retval = wallet_generate_recover("recover", wallet_root_path,
- mixdepth=options.mixdepth)
+ retval = await wallet_generate_recover(
+ "recover", wallet_root_path, mixdepth=options.mixdepth)
return "Recovered wallet OK" if retval else "Failed"
elif method == "changepass":
retval = wallet_change_passphrase(wallet_service)
return "Changed encryption passphrase OK" if retval else "Failed"
elif method == "showutxos":
- return wallet_showutxos(wallet_service,
- showprivkey=options.showprivkey,
- limit_mixdepth=options.mixdepth)
+ return await wallet_showutxos(wallet_service,
+ showprivkey=options.showprivkey,
+ limit_mixdepth=options.mixdepth)
elif method == "showseed":
return wallet_showseed(wallet_service)
elif method == "dumpprivkey":
@@ -1719,46 +1862,126 @@ def wallet_tool_main(wallet_root_path):
#note: must be interactive (security)
if options.mixdepth is None:
parser.error("You need to specify a mixdepth with -m")
- wallet_importprivkey(wallet_service, options.mixdepth)
+ await wallet_importprivkey(wallet_service, options.mixdepth)
return "Key import completed."
elif method == "signmessage":
if len(args) < 3:
jmprint('Must provide message to sign', "error")
- sys.exit(EXIT_ARGERROR)
- return wallet_signmessage(wallet_service, options.hd_path, args[2])
+ twisted_sys_exit(EXIT_ARGERROR)
+ return await wallet_signmessage(
+ wallet_service, options.hd_path, args[2])
elif method == "signpsbt":
if len(args) < 3:
jmprint("Must provide PSBT to sign", "error")
- sys.exit(EXIT_ARGERROR)
- return wallet_signpsbt(wallet_service, args[2])
+ twisted_sys_exit(EXIT_ARGERROR)
+ return await wallet_signpsbt(wallet_service, args[2])
elif method == "freeze":
- return wallet_freezeutxo(wallet_service, options.mixdepth)
+ return await wallet_freezeutxo(wallet_service, options.mixdepth)
elif method == "gettimelockaddress":
if len(args) < 3:
jmprint('Must have locktime value yyyy-mm. For example 2021-03', "error")
- sys.exit(EXIT_ARGERROR)
- return wallet_gettimelockaddress(wallet_service.wallet, args[2])
+ twisted_sys_exit(EXIT_ARGERROR)
+ return await wallet_gettimelockaddress(wallet_service.wallet, args[2])
elif method == "addtxoutproof":
if len(args) < 3:
jmprint('Must have txout proof, which is the output of Bitcoin '
+ 'Core\'s RPC call gettxoutproof', "error")
- sys.exit(EXIT_ARGERROR)
+ twisted_sys_exit(EXIT_ARGERROR)
return wallet_addtxoutproof(wallet_service, options.hd_path, args[2])
elif method == "createwatchonly":
if len(args) < 2:
jmprint("args: [master public key]", "error")
- sys.exit(EXIT_ARGERROR)
- return wallet_createwatchonly(wallet_root_path, args[1])
+ twisted_sys_exit(EXIT_ARGERROR)
+ return await wallet_createwatchonly(wallet_root_path, args[1])
elif method == "setlabel":
if len(args) < 4:
jmprint("args: address label", "error")
- sys.exit(EXIT_ARGERROR)
+ twisted_sys_exit(EXIT_ARGERROR)
wallet.set_address_label(args[2], args[3])
if args[3]:
return "Address label set"
else:
return "Address label removed"
+ elif method == "servefrost":
+ if not isinstance(wallet, FrostWallet):
+ return 'Command "servefrost" can only be used with FROST wallets'
+ client = FROSTClient(wallet_service)
+ cfactory = JMClientProtocolFactory(client, proto_type="MAKER")
+ wallet.set_client_factory(cfactory)
+
+ async def wait_jm_up():
+ while True:
+ await asyncio.sleep(1)
+ if client.jm_up:
+ break
+
+ start_reactor(
+ jm_single().config.get("DAEMON", "daemon_host"),
+ jm_single().config.getint("DAEMON", "daemon_port"),
+ cfactory,
+ ish=True,
+ daemon=True,
+ gui=True)
+ await wait_jm_up()
+ ipc_server = FrostIPCServer(wallet)
+ await ipc_server.async_init()
+ await ipc_server.serve_forever()
+ return
+ elif method == "testfrost":
+ if not isinstance(wallet, FrostWallet):
+ return 'Command "testfrost" can only be used with FROST wallets'
+ from hashlib import sha256
+ from bitcointx.core.key import XOnlyPubKey
+ msg = 'testmsg'
+ md = address_type = index = 0
+ msghash = sha256(msg.encode()).digest()
+ sig, pubkey, tweaked_pubkey = await wallet.ipc_client.frost_sign(
+ md, address_type, index, msghash)
+ verify_pubkey = XOnlyPubKey(tweaked_pubkey[1:])
+ if verify_pubkey.verify_schnorr(msghash, sig):
+ return "Schnorr signature successfully verified"
+ else:
+ jmprint("Schnorr signature verify failed", "error")
+ return
+ elif method == "hostpubkey":
+ if not isinstance(wallet, FrostWallet):
+ return 'Command "hostpubkey" can only be used with FROST wallets'
+ hostseckey = wallet._hostseckey
+ if hostseckey:
+ hostpubkey = hostpubkey_gen(hostseckey[:32])
+ return hostpubkey.hex()
+ else:
+ return 'No hostseckey available'
+ elif method == "dkgrecover":
+ if not isinstance(wallet, FrostWallet):
+ return 'Command "dkgrecover" can only be used with FROST wallets'
+ dkgrec_path = args[2]
+ return await wallet_service.dkg.dkg_recover(dkgrec_path)
+ elif method == "dkgls":
+ if not isinstance(wallet, FrostWallet):
+ return 'Command "dkgls" can only be used with FROST wallets'
+ return wallet_service.dkg.dkg_ls()
+ elif method == "dkgrm":
+ if not isinstance(wallet, FrostWallet):
+ return 'Command "dkgrm" can only be used with FROST wallets'
+ session_ids = args[2:]
+ if not session_ids:
+ jmprint("no session ids specified", "error")
+ return
+ session_ids = list(dict.fromkeys(session_ids)) # make unique
+ return wallet_service.dkg.dkg_rm(session_ids)
+ elif method == "recdkgls":
+ if not isinstance(wallet, FrostWallet):
+ return 'Command "recdkgls" can only be used with FROST wallets'
+ return wallet_service.dkg.recdkg_ls()
+ elif method == "recdkgrm":
+ if not isinstance(wallet, FrostWallet):
+ return 'Command "recdkgrm" can only be used with FROST wallets'
+ session_ids = args[2:]
+ if not session_ids:
+ jmprint("no session ids specified", "error")
+ return
+ session_ids = list(dict.fromkeys(session_ids)) # make unique
+ return wallet_service.dkg.recdkg_rm(session_ids)
else:
parser.error("Unknown wallet-tool method: " + method)
- sys.exit(EXIT_ARGERROR)
+ twisted_sys_exit(EXIT_ARGERROR)
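
The `dkgrm`/`recdkgrm` handlers above de-duplicate session ids with the `dict.fromkeys` idiom, which, unlike `set`, preserves first-seen order. A quick standalone illustration (placeholder ids):

```python
# Order-preserving de-duplication, as used for session_ids above.
session_ids = ["a1", "b2", "a1", "c3", "b2"]  # placeholder ids
unique_ids = list(dict.fromkeys(session_ids))
assert unique_ids == ["a1", "b2", "c3"]

# A set would also de-duplicate, but does not preserve input order.
assert sorted(set(session_ids)) == ["a1", "b2", "c3"]
```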
diff --git a/src/jmclient/yieldgenerator.py b/src/jmclient/yieldgenerator.py
index b7e1c21..bf3554b 100644
--- a/src/jmclient/yieldgenerator.py
+++ b/src/jmclient/yieldgenerator.py
@@ -1,5 +1,6 @@
#! /usr/bin/env python
+import asyncio
import datetime
import os
import time
@@ -14,12 +15,17 @@ from jmclient import (Maker, jm_single, load_program_config,
JMClientProtocolFactory, start_reactor, calc_cj_fee,
WalletService, add_base_options, SNICKERReceiver,
SNICKERClientProtocolFactory, FidelityBondMixin,
- get_interest_rate, fmt_utxo, check_and_start_tor)
+ get_interest_rate, fmt_utxo, check_and_start_tor,
+ FrostWallet)
from .wallet_utils import open_test_wallet_maybe, get_wallet_path
-from jmbase.support import EXIT_ARGERROR, EXIT_FAILURE, get_jm_version_str
+from jmbase.support import (EXIT_ARGERROR, EXIT_FAILURE, get_jm_version_str,
+ twisted_sys_exit)
import jmbitcoin as btc
from jmclient.fidelity_bond import FidelityBond
+from .frost_ipc import FrostIPCClient
+
+
jlog = get_log()
MAX_MIX_DEPTH = 5
@@ -127,7 +133,7 @@ class YieldGeneratorBasic(YieldGenerator):
return [order]
- def get_fidelity_bond_template(self):
+ async def get_fidelity_bond_template(self):
if not isinstance(self.wallet_service.wallet, FidelityBondMixin):
jlog.info("Not a fidelity bond wallet, not announcing fidelity bond")
return None
@@ -140,8 +146,10 @@ class YieldGeneratorBasic(YieldGenerator):
CERT_MAX_VALIDITY_TIME = 1
cert_expiry = ((blocks + BLOCK_COUNT_SAFETY) // RETARGET_INTERVAL) + CERT_MAX_VALIDITY_TIME
- utxos = self.wallet_service.wallet.get_utxos_by_mixdepth(include_disabled=True,
- includeheight=True)[FidelityBondMixin.FIDELITY_BOND_MIXDEPTH]
+ _utxos = await self.wallet_service.wallet.get_utxos_by_mixdepth(
+ include_disabled=True,
+ includeheight=True)
+ utxos = _utxos[FidelityBondMixin.FIDELITY_BOND_MIXDEPTH]
timelocked_utxos = [(outpoint, info) for outpoint, info in utxos.items()
if FidelityBondMixin.is_timelocked_path(info["path"])]
if len(timelocked_utxos) == 0:
@@ -171,7 +179,7 @@ class YieldGeneratorBasic(YieldGenerator):
jlog.info("Announcing fidelity bond coin {}".format(fmt_utxo(utxo)))
return fidelity_bond
- def oid_to_order(self, offer, amount):
+ async def oid_to_order(self, offer, amount):
total_amount = amount + offer["txfee"]
real_cjfee = calc_cj_fee(offer["ordertype"], offer["cjfee"], amount)
required_amount = total_amount + \
@@ -186,7 +194,7 @@ class YieldGeneratorBasic(YieldGenerator):
jlog.debug('mix depths that have enough = ' + str(filtered_mix_balance))
try:
- mixdepth, utxos = self._get_order_inputs(
+ mixdepth, utxos = await self._get_order_inputs(
filtered_mix_balance, offer, required_amount)
except NoIoauthInputException:
jlog.error(
@@ -198,16 +206,17 @@ class YieldGeneratorBasic(YieldGenerator):
jlog.info('filling offer, mixdepth=' + str(mixdepth) + ', amount=' + str(amount))
- cj_addr = self.select_output_address(mixdepth, offer, amount)
+ cj_addr = await self.select_output_address(mixdepth, offer, amount)
if cj_addr is None:
return None, None, None
jlog.info('sending output to address=' + str(cj_addr))
change_amount = sum(u["value"] for u in utxos.values()) - total_amount + real_cjfee
- change_addr = self.select_change_address(mixdepth, change_amount)
+ change_addr = await self.select_change_address(mixdepth, change_amount)
return utxos, cj_addr, change_addr
- def _get_order_inputs(self, filtered_mix_balance, offer, required_amount):
+ async def _get_order_inputs(self, filtered_mix_balance,
+ offer, required_amount):
"""
Select inputs from some applicable mixdepth that has a utxo suitable
for ioauth.
@@ -226,7 +235,7 @@ class YieldGeneratorBasic(YieldGenerator):
while filtered_mix_balance:
mixdepth = self.select_input_mixdepth(
filtered_mix_balance, offer, required_amount)
- utxos = self.wallet_service.select_utxos(
+ utxos = await self.wallet_service.select_utxos(
mixdepth, required_amount, minconfs=1, includeaddr=True,
require_auth_address=True)
if utxos:
@@ -263,19 +272,21 @@ class YieldGeneratorBasic(YieldGenerator):
available = sorted(available.items(), key=lambda entry: entry[0])
return available[0][0]
- def select_output_address(self, input_mixdepth, offer, amount):
+ async def select_output_address(self, input_mixdepth, offer, amount):
"""Returns the address to which the mixed output should be sent for
an order spending from the given input mixdepth. Can return None if
there is no suitable output, in which case the order is
aborted."""
cjoutmix = (input_mixdepth + 1) % (self.wallet_service.mixdepth + 1)
- return self.wallet_service.get_internal_addr(cjoutmix)
+ return await self.wallet_service.get_internal_addr(cjoutmix)
- def select_change_address(self, input_mixdepth: int, change_amount: int) -> str:
+ async def select_change_address(self, input_mixdepth: int,
+ change_amount: int) -> str:
"""Returns the address to which the change should be sent for an
order spending from the given input mixdepth. Must not return
None."""
- return self.wallet_service.get_internal_addr(input_mixdepth)
+ return await self.wallet_service.get_internal_addr(input_mixdepth)
+
class YieldGeneratorService(Service):
def __init__(self, wallet_service, daemon_host, daemon_port, yg_config):
@@ -301,15 +312,20 @@ class YieldGeneratorService(Service):
# we do not catch Exceptions in setup,
# deliberately; this must be caught and distinguished
# by whoever started the service.
- setup()
+ if asyncio.iscoroutinefunction(setup):
+ raise NotImplementedError() # FIXME
+ else:
+ setup()
# TODO genericise to any YG class:
self.yieldgen = YieldGeneratorBasic(self.wallet_service, self.yg_config)
self.clientfactory = JMClientProtocolFactory(self.yieldgen, proto_type="MAKER")
+ wallet = self.wallet_service.wallet
# here 'start_reactor' does not start the reactor but instantiates
# the connection to the daemon backend; note daemon=False, i.e. the daemon
# backend is assumed to be started elsewhere; we just connect to it with a client.
- start_reactor(self.daemon_host, self.daemon_port, self.clientfactory, rs=False)
+ start_reactor(self.daemon_host, self.daemon_port, self.clientfactory,
+ rs=False, gui=True)
# monitor the Maker object, just to check if it's still in an "up" state, marked
# by the aborted instance var:
self.monitor_loop = task.LoopingCall(self.monitor)
@@ -351,7 +367,7 @@ class YieldGeneratorService(Service):
def isRunning(self):
return self.running == 1
-def ygmain(ygclass, nickserv_password='', gaplimit=6):
+async def ygmain(ygclass, nickserv_password='', gaplimit=6):
import sys
parser = OptionParser(usage='usage: %prog [options] [wallet file]')
@@ -406,7 +422,7 @@ def ygmain(ygclass, nickserv_password='', gaplimit=6):
options = vars(options)
if len(args) < 1:
parser.error('Needs a wallet')
- sys.exit(EXIT_ARGERROR)
+ twisted_sys_exit(EXIT_ARGERROR)
load_program_config(config_path=options["datadir"])
@@ -437,24 +453,25 @@ def ygmain(ygclass, nickserv_password='', gaplimit=6):
else:
parser.error('You specified an incorrect offer type which ' +\
'can be either reloffer or absoffer')
- sys.exit(EXIT_ARGERROR)
+ twisted_sys_exit(EXIT_ARGERROR)
nickserv_password = options["password"]
if jm_single().bc_interface is None:
jlog.error("Running yield generator requires configured " +
"blockchain source.")
- sys.exit(EXIT_FAILURE)
+ twisted_sys_exit(EXIT_FAILURE)
wallet_path = get_wallet_path(wallet_name, None)
- wallet = open_test_wallet_maybe(
+ wallet = await open_test_wallet_maybe(
wallet_path, wallet_name, options["mixdepth"],
wallet_password_stdin=options["wallet_password_stdin"],
gap_limit=options["gaplimit"])
+ if isinstance(wallet, FrostWallet):
+ ipc_client = FrostIPCClient(wallet)
+ await ipc_client.async_init()
+ wallet.set_ipc_client(ipc_client)
wallet_service = WalletService(wallet)
- while not wallet_service.synced:
- wallet_service.sync_wallet(fast=not options["recoversync"])
- wallet_service.startService()
txtype = wallet_service.get_txtype()
if txtype == "p2tr":
@@ -467,7 +484,7 @@ def ygmain(ygclass, nickserv_password='', gaplimit=6):
prefix = ""
else:
jlog.error("Unsupported wallet type for yieldgenerator: " + txtype)
- sys.exit(EXIT_ARGERROR)
+ twisted_sys_exit(EXIT_ARGERROR)
ordertype = prefix + ordertype
jlog.debug("Set the offer type string to: " + ordertype)
@@ -483,7 +500,7 @@ def ygmain(ygclass, nickserv_password='', gaplimit=6):
"yet supported for yieldgenerators; either use "
"signet/regtest/testnet, or run SNICKER manually "
"with snicker/receive-snicker.py.")
- sys.exit(EXIT_ARGERROR)
+ twisted_sys_exit(EXIT_ARGERROR)
snicker_r = SNICKERReceiver(wallet_service)
servers = jm_single().config.get("SNICKER", "servers").split(",")
snicker_factory = SNICKERClientProtocolFactory(snicker_r, servers)
@@ -493,7 +510,11 @@ def ygmain(ygclass, nickserv_password='', gaplimit=6):
daemon = True if nodaemon == 1 else False
if jm_single().config.get("BLOCKCHAIN", "network") in ["regtest", "testnet", "signet"]:
startLogging(sys.stdout)
+
start_reactor(jm_single().config.get("DAEMON", "daemon_host"),
- jm_single().config.getint("DAEMON", "daemon_port"),
- clientfactory, snickerfactory=snicker_factory,
- daemon=daemon)
+ jm_single().config.getint("DAEMON", "daemon_port"),
+ clientfactory, snickerfactory=snicker_factory,
+ daemon=daemon, gui=True)
+ while not wallet_service.synced:
+ await wallet_service.sync_wallet(fast=not options["recoversync"])
+ wallet_service.startService()
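
The service startup above must cope with `setup` callables that may be either plain functions or coroutine functions (the branch marked `FIXME`). A minimal standalone sketch of such a dispatch, using a hypothetical `run_setup` helper not present in the codebase:

```python
import asyncio

calls = []

def sync_setup():
    calls.append("sync")

async def async_setup():
    calls.append("async")

async def run_setup(setup):
    # Dispatch on the callable type: await coroutine functions,
    # call plain functions directly.
    if asyncio.iscoroutinefunction(setup):
        await setup()
    else:
        setup()

asyncio.run(run_setup(sync_setup))
asyncio.run(run_setup(async_setup))
assert calls == ["sync", "async"]
```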
diff --git a/src/jmdaemon/__init__.py b/src/jmdaemon/__init__.py
index fc1c407..adcf472 100644
--- a/src/jmdaemon/__init__.py
+++ b/src/jmdaemon/__init__.py
@@ -1,5 +1,7 @@
+# -*- coding: utf-8 -*-
import logging
+
from .protocol import *
from .enc_wrapper import as_init_encryption, decode_decrypt, \
encrypt_encode, init_keypair, init_pubkey, get_pubkey, NaclError
diff --git a/src/jmdaemon/daemon_protocol.py b/src/jmdaemon/daemon_protocol.py
index 9fdd641..30cb96a 100644
--- a/src/jmdaemon/daemon_protocol.py
+++ b/src/jmdaemon/daemon_protocol.py
@@ -549,6 +549,16 @@ class JMDaemonServerProtocol(amp.AMP, OrderbookWatch):
self.on_push_tx,
self.on_commitment_seen,
self.on_commitment_transferred)
+ self.mcc.register_frost_callbacks(self.on_dkginit,
+ self.on_dkgpmsg1,
+ self.on_dkgpmsg2,
+ self.on_dkgfinalized,
+ self.on_dkgcmsg1,
+ self.on_dkgcmsg2,
+ self.on_frostinit,
+ self.on_frostround1,
+ self.on_frostround2,
+ self.on_frostagg1)
self.mcc.set_daemon(self)
d = self.callRemote(JMInitProto,
nick_hash_length=NICK_HASH_LENGTH,
@@ -607,6 +617,73 @@ class JMDaemonServerProtocol(amp.AMP, OrderbookWatch):
self.jm_state = 0
return {'accepted': True}
+
+ """DKG specific responders
+ """
+ @JMDKGInit.responder
+ def on_JM_DKG_INIT(self, hostpubkeyhash, session_id, sig):
+ self.mcc.pubmsg(f'!dkginit {hostpubkeyhash} {session_id} {sig}')
+ return {'accepted': True}
+
+ @JMDKGPMsg1.responder
+ def on_JM_DKG_PMSG1(self, nick, hostpubkeyhash, session_id, sig, pmsg1):
+ msg = f'{hostpubkeyhash} {session_id} {sig} {pmsg1}'
+ self.mcc.prepare_privmsg(nick, "dkgpmsg1", msg)
+ return {'accepted': True}
+
+ @JMDKGPMsg2.responder
+ def on_JM_DKG_PMSG2(self, nick, session_id, pmsg2):
+ msg = f'{session_id} {pmsg2}'
+ self.mcc.prepare_privmsg(nick, "dkgpmsg2", msg)
+ return {'accepted': True}
+
+ @JMDKGFinalized.responder
+ def on_JM_DKG_FINALIZED(self, session_id, nick):
+ msg = f'{session_id}'
+ self.mcc.prepare_privmsg(nick, "dkgfinalized", msg)
+ return {'accepted': True}
+
+ @JMDKGCMsg1.responder
+ def on_JM_DKG_CMSG1(self, nick, session_id, cmsg1):
+ msg = f'{session_id} {cmsg1}'
+ self.mcc.prepare_privmsg(nick, "dkgcmsg1", msg)
+ return {'accepted': True}
+
+ @JMDKGCMsg2.responder
+ def on_JM_DKG_CMSG2(self, nick, session_id, cmsg2, ext_recovery):
+ msg = f'{session_id} {cmsg2} {ext_recovery}'
+ self.mcc.prepare_privmsg(nick, "dkgcmsg2", msg)
+ return {'accepted': True}
+
+
+ """FROST specific responders
+ """
+ @JMFROSTInit.responder
+ def on_JM_FROST_INIT(self, hostpubkeyhash, session_id, sig):
+ self.mcc.pubmsg(f'!frostinit {hostpubkeyhash} {session_id} {sig}')
+ return {'accepted': True}
+
+ @JMFROSTRound1.responder
+ def on_JM_FROST_ROUND1(self, nick, hostpubkeyhash,
+ session_id, sig, pub_nonce):
+ msg = f'{hostpubkeyhash} {session_id} {sig} {pub_nonce}'
+ self.mcc.prepare_privmsg(nick, "frostround1", msg)
+ return {'accepted': True}
+
+ @JMFROSTAgg1.responder
+ def on_JM_FROST_AGG1(self, nick, session_id,
+ nonce_agg, dkg_session_id, ids, msg):
+ msg = f'{session_id} {nonce_agg} {dkg_session_id} {ids} {msg}'
+ self.mcc.prepare_privmsg(nick, "frostagg1", msg)
+ return {'accepted': True}
+
+ @JMFROSTRound2.responder
+ def on_JM_FROST_ROUND2(self, nick, session_id, partial_sig):
+ msg = f'{session_id} {partial_sig}'
+ self.mcc.prepare_privmsg(nick, "frostround2", msg)
+ return {'accepted': True}
+
+
"""Taker specific responders
"""
@@ -737,6 +814,68 @@ class JMDaemonServerProtocol(amp.AMP, OrderbookWatch):
d = self.callRemote(JMUp)
self.defaultCallbacks(d)
+ # frost commands
+ def on_dkginit(self, nick, hostpubkeyhash, session_id, sig):
+ d = self.callRemote(JMDKGInitSeen,
+ nick=nick, hostpubkeyhash=hostpubkeyhash,
+ session_id=session_id, sig=sig)
+ self.defaultCallbacks(d)
+
+ def on_dkgpmsg1(self, nick, hostpubkeyhash, session_id, sig, pmsg1):
+ d = self.callRemote(JMDKGPMsg1Seen,
+ nick=nick, hostpubkeyhash=hostpubkeyhash,
+ session_id=session_id, sig=sig, pmsg1=pmsg1)
+ self.defaultCallbacks(d)
+
+ def on_dkgpmsg2(self, nick, session_id, pmsg2):
+ d = self.callRemote(JMDKGPMsg2Seen,
+ nick=nick, session_id=session_id, pmsg2=pmsg2)
+ self.defaultCallbacks(d)
+
+ def on_dkgfinalized(self, nick, session_id):
+ d = self.callRemote(JMDKGFinalizedSeen,
+ nick=nick, session_id=session_id)
+ self.defaultCallbacks(d)
+
+ def on_dkgcmsg1(self, nick, session_id, cmsg1):
+ d = self.callRemote(JMDKGCMsg1Seen,
+ nick=nick, session_id=session_id, cmsg1=cmsg1)
+ self.defaultCallbacks(d)
+
+ def on_dkgcmsg2(self, nick, session_id, cmsg2, ext_recovery):
+ d = self.callRemote(JMDKGCMsg2Seen,
+ nick=nick, session_id=session_id, cmsg2=cmsg2,
+ ext_recovery=ext_recovery)
+ self.defaultCallbacks(d)
+
+ def on_frostinit(self, nick, hostpubkeyhash, session_id, sig):
+ d = self.callRemote(JMFROSTInitSeen,
+ nick=nick, hostpubkeyhash=hostpubkeyhash,
+ session_id=session_id, sig=sig)
+ self.defaultCallbacks(d)
+
+ def on_frostround1(self, nick, hostpubkeyhash, session_id, sig, pub_nonce):
+ d = self.callRemote(JMFROSTRound1Seen,
+ nick=nick, hostpubkeyhash=hostpubkeyhash,
+ session_id=session_id, sig=sig,
+ pub_nonce=pub_nonce)
+ self.defaultCallbacks(d)
+
+ def on_frostround2(self, nick, session_id, partial_sig):
+ d = self.callRemote(JMFROSTRound2Seen,
+ nick=nick, session_id=session_id,
+ partial_sig=partial_sig)
+ self.defaultCallbacks(d)
+
+ def on_frostagg1(self, nick, session_id,
+ nonce_agg, dkg_session_id, ids, msg):
+ d = self.callRemote(JMFROSTAgg1Seen,
+ nick=nick, session_id=session_id,
+ nonce_agg=nonce_agg,
+ dkg_session_id=dkg_session_id, ids=ids, msg=msg)
+ self.defaultCallbacks(d)
+
+
@maker_only
def on_orderbook_requested(self, nick, mc=None):
"""Dealt with by daemon, assuming offerlist is up to date
diff --git a/src/jmdaemon/message_channel.py b/src/jmdaemon/message_channel.py
index 3734cce..c4f0e2a 100644
--- a/src/jmdaemon/message_channel.py
+++ b/src/jmdaemon/message_channel.py
@@ -609,6 +609,25 @@ class MessageChannelCollection(object):
on_order_fill, on_seen_auth, on_seen_tx,
on_push_tx, on_commitment_seen,
on_commitment_transferred)
+ # frost commands
+ def register_frost_callbacks(self,
+ on_dkginit=None,
+ on_dkgpmsg1=None,
+ on_dkgpmsg2=None,
+ on_dkgfinalized=None,
+ on_dkgcmsg1=None,
+ on_dkgcmsg2=None,
+ on_frostinit=None,
+ on_frostround1=None,
+ on_frostround2=None,
+ on_frostagg1=None):
+ for mc in self.mchannels:
+ mc.register_frost_callbacks(
+ on_dkginit,
+ on_dkgpmsg1, on_dkgpmsg2, on_dkgfinalized,
+ on_dkgcmsg1, on_dkgcmsg2,
+ on_frostinit,
+ on_frostround1, on_frostround2, on_frostagg1)
def on_verified_privmsg(self, nick, message, hostid):
"""Called from daemon when message was successfully verified,
@@ -666,6 +685,17 @@ class MessageChannel(object):
self.on_seen_auth = None
self.on_seen_tx = None
self.on_push_tx = None
+ # frost functions
+ self.on_dkginit = None
+ self.on_dkgpmsg1 = None
+ self.on_dkgpmsg2 = None
+ self.on_dkgfinalized = None
+ self.on_dkgcmsg1 = None
+ self.on_dkgcmsg2 = None
+ self.on_frostinit = None
+ self.on_frostround1 = None
+ self.on_frostround2 = None
+ self.on_frostagg1 = None
self.daemon = None
@@ -772,6 +802,29 @@ class MessageChannel(object):
self.on_commitment_seen = on_commitment_seen
self.on_commitment_transferred = on_commitment_transferred
+ # frost commands
+ def register_frost_callbacks(self,
+ on_dkginit=None,
+ on_dkgpmsg1=None,
+ on_dkgpmsg2=None,
+ on_dkgfinalized=None,
+ on_dkgcmsg1=None,
+ on_dkgcmsg2=None,
+ on_frostinit=None,
+ on_frostround1=None,
+ on_frostround2=None,
+ on_frostagg1=None):
+ self.on_dkginit = on_dkginit
+ self.on_dkgpmsg1 = on_dkgpmsg1
+ self.on_dkgpmsg2 = on_dkgpmsg2
+ self.on_dkgfinalized = on_dkgfinalized
+ self.on_dkgcmsg1 = on_dkgcmsg1
+ self.on_dkgcmsg2 = on_dkgcmsg2
+ self.on_frostinit = on_frostinit
+ self.on_frostround1 = on_frostround1
+ self.on_frostround2 = on_frostround2
+ self.on_frostagg1 = on_frostagg1
+
def announce_orders(self, orderlines):
self._announce_orders(orderlines)
@@ -889,7 +942,28 @@ class MessageChannel(object):
return
for command in commands:
_chunks = command.split(" ")
- if self.check_for_orders(nick, _chunks):
+ if _chunks[0] == 'dkginit':
+ try:
+ hostpubkeyhash = _chunks[1]
+ session_id = _chunks[2]
+ sig = _chunks[3]
+ if self.on_dkginit:
+ self.on_dkginit(nick, hostpubkeyhash, session_id, sig)
+ except (ValueError, IndexError) as e:
+ log.debug("!dkginit " + repr(e))
+ return
+ elif _chunks[0] == 'frostinit':
+ try:
+ hostpubkeyhash = _chunks[1]
+ session_id = _chunks[2]
+ sig = _chunks[3]
+ if self.on_frostinit:
+ self.on_frostinit(nick, hostpubkeyhash,
+ session_id, sig)
+ except (ValueError, IndexError) as e:
+ log.debug("!frostinit " + repr(e))
+ return
+ elif self.check_for_orders(nick, _chunks):
pass
if self.check_for_commitments(nick, _chunks):
pass
@@ -1057,6 +1131,59 @@ class MessageChannel(object):
return
if self.on_push_tx:
self.on_push_tx(nick, tx)
+
+ # frost commands
+ elif _chunks[0] == 'dkgpmsg1':
+ hostpubkeyhash = _chunks[1]
+ session_id = _chunks[2]
+ sig = _chunks[3]
+ pmsg1 = _chunks[4]
+ if self.on_dkgpmsg1:
+ self.on_dkgpmsg1(nick, hostpubkeyhash, session_id, sig,
+ pmsg1)
+ elif _chunks[0] == 'dkgpmsg2':
+ session_id = _chunks[1]
+ pmsg2 = _chunks[2]
+ if self.on_dkgpmsg2:
+ self.on_dkgpmsg2(nick, session_id, pmsg2)
+ elif _chunks[0] == 'dkgfinalized':
+ session_id = _chunks[1]
+ if self.on_dkgfinalized:
+ self.on_dkgfinalized(nick, session_id)
+ elif _chunks[0] == 'dkgcmsg1':
+ session_id = _chunks[1]
+ cmsg1 = _chunks[2]
+ if self.on_dkgcmsg1:
+ self.on_dkgcmsg1(nick, session_id, cmsg1)
+ elif _chunks[0] == 'dkgcmsg2':
+ session_id = _chunks[1]
+ cmsg2 = _chunks[2]
+ ext_recovery = _chunks[3]
+ if self.on_dkgcmsg2:
+ self.on_dkgcmsg2(nick, session_id, cmsg2, ext_recovery)
+ elif _chunks[0] == 'frostround1':
+ hostpubkeyhash = _chunks[1]
+ session_id = _chunks[2]
+ sig = _chunks[3]
+ pub_nonce = _chunks[4]
+ if self.on_frostround1:
+ self.on_frostround1(
+ nick, hostpubkeyhash, session_id, sig, pub_nonce)
+ elif _chunks[0] == 'frostagg1':
+ session_id = _chunks[1]
+ nonce_agg = _chunks[2]
+ dkg_session_id = _chunks[3]
+ ids = _chunks[4]
+ msg = _chunks[5]
+ if self.on_frostagg1:
+ self.on_frostagg1(
+ nick, session_id, nonce_agg,
+ dkg_session_id, ids, msg)
+ elif _chunks[0] == 'frostround2':
+ session_id = _chunks[1]
+ partial_sig = _chunks[2]
+ if self.on_frostround2:
+ self.on_frostround2(nick, session_id, partial_sig)
except (IndexError, ValueError):
# TODO proper error handling
log.debug('cj peer error TODO handle')
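
The privmsg handling above splits each command on spaces and dispatches on the first token, with `IndexError`/`ValueError` catching truncated messages. A table-driven sketch of the same idea (the command set and field names here are illustrative, not the full protocol):

```python
# First-token dispatch for space-separated commands, mirroring the
# parsing style above with a lookup table instead of elif chains.
def parse_frost_command(command: str):
    chunks = command.split(" ")
    handlers = {
        "dkgpmsg2": ("session_id", "pmsg2"),
        "dkgfinalized": ("session_id",),
        "frostround2": ("session_id", "partial_sig"),
    }
    fields = handlers.get(chunks[0])
    if fields is None:
        return None  # not a FROST/DKG command
    if len(chunks) - 1 < len(fields):
        raise IndexError("truncated command: " + chunks[0])
    return chunks[0], dict(zip(fields, chunks[1:]))

cmd, args = parse_frost_command("frostround2 abc123 deadbeef")
assert cmd == "frostround2"
assert args == {"session_id": "abc123", "partial_sig": "deadbeef"}
```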
diff --git a/src/jmdaemon/protocol.py b/src/jmdaemon/protocol.py
index aa55d89..6c9afd8 100644
--- a/src/jmdaemon/protocol.py
+++ b/src/jmdaemon/protocol.py
@@ -37,11 +37,21 @@ NICK_MAX_ENCODED = 14 #comes from base58 expansion; recalculate if above change
#commitments; note multiple options may be used in future
COMMITMENT_PREFIXES = ["P"]
#Lists of valid commands
+dkg_public_list = ['dkginit']
+dkg_private_list = ['dkgpmsg1', 'dkgpmsg2', 'dkgcmsg1', 'dkgcmsg2',
+ 'dkgfinalized']
+frost_public_list = ['frostinit']
+frost_private_list = ['frostround1', 'frostround2', 'frostagg1']
encrypted_commands = ["auth", "ioauth", "tx", "sig"]
plaintext_commands = ["fill", "error", "pubkey", "orderbook", "push"]
commitment_broadcast_list = ["hp2"]
plaintext_commands += offername_list
plaintext_commands += commitment_broadcast_list
-public_commands = commitment_broadcast_list + ["orderbook", "cancel"
- ] + offername_list
+plaintext_commands += dkg_public_list
+plaintext_commands += dkg_private_list
+plaintext_commands += frost_public_list
+plaintext_commands += frost_private_list
+public_commands = (commitment_broadcast_list + ["orderbook", "cancel"]
+ + offername_list + dkg_public_list + frost_public_list)
private_commands = encrypted_commands + plaintext_commands
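
The `public_commands` concatenation is a spot where an extra pair of brackets silently nests a list instead of extending it, which breaks membership checks. A standalone illustration (placeholder list values):

```python
# Placeholder values standing in for the real command lists.
commitment_broadcast_list = ["hp2"]
offername_list = ["sw0reloffer"]
dkg_public_list = ["dkginit"]
frost_public_list = ["frostinit"]

# Correct: '+' between lists extends them flatly.
public_commands = (commitment_broadcast_list + ["orderbook", "cancel"]
                   + offername_list + dkg_public_list + frost_public_list)
assert "dkginit" in public_commands

# Pitfall: wrapping in an extra '[...]' appends one nested-list element,
# so string membership checks fail.
nested = commitment_broadcast_list + [dkg_public_list + frost_public_list]
assert "dkginit" not in nested
assert ["dkginit", "frostinit"] in nested
```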
diff --git a/src/jmfrost/__init__.py b/src/jmfrost/__init__.py
new file mode 100644
index 0000000..bada04a
--- /dev/null
+++ b/src/jmfrost/__init__.py
@@ -0,0 +1,19 @@
+# -*- coding: utf-8 -*-
+
+# chilldkg_ref, secp256k1proto code is from
+# https://github.com/BlockstreamResearch/bip-frost-dkg
+#
+# commit 1731341f04157592e2f184cb00a37c4d331188e3
+# Author: Tim Ruffing
+# Date: Wed Dec 18 23:42:26 2024 +0100
+#
+# text: Use links for internal references
+
+# frost_ref is from
+# https://github.com/siv2r/bip-frost-signing
+#
+# commit 2f249969f84c1533671c521bf864fddecb371018
+# Author: siv2r
+# Date: Sat Dec 7 17:13:54 2024 +0530
+#
+# spec: add header, changelog, and acknowledgements
diff --git a/src/jmfrost/chilldkg_ref/__init__.py b/src/jmfrost/chilldkg_ref/__init__.py
new file mode 100644
index 0000000..daf5b85
--- /dev/null
+++ b/src/jmfrost/chilldkg_ref/__init__.py
@@ -0,0 +1,3 @@
+# -*- coding: utf-8 -*-
+
+__all__ = ["chilldkg"]
diff --git a/src/jmfrost/chilldkg_ref/chilldkg.py b/src/jmfrost/chilldkg_ref/chilldkg.py
new file mode 100644
index 0000000..a33b997
--- /dev/null
+++ b/src/jmfrost/chilldkg_ref/chilldkg.py
@@ -0,0 +1,841 @@
+"""Reference implementation of ChillDKG.
+
+WARNING: This code is slow and trivially vulnerable to side channel attacks. Do
+not use for anything but tests.
+
+The public API consists of all functions with docstrings, including the types in
+their arguments and return values, and the exceptions they raise; see also the
+`__all__` list. All other definitions are internal.
+"""
+
+from secrets import token_bytes as random_bytes
+from typing import Any, Tuple, List, NamedTuple, NewType, Optional, NoReturn, Dict
+
+from ..secp256k1proto.secp256k1 import Scalar, GE
+from ..secp256k1proto.bip340 import schnorr_sign, schnorr_verify
+from ..secp256k1proto.keys import pubkey_gen_plain
+from ..secp256k1proto.util import int_from_bytes, bytes_from_int
+
+from .vss import VSSCommitment
+from . import encpedpop
+from .util import (
+ BIP_TAG,
+ tagged_hash_bip_dkg,
+ ProtocolError,
+ FaultyParticipantOrCoordinatorError,
+ FaultyCoordinatorError,
+ UnknownFaultyParticipantOrCoordinatorError,
+ FaultyParticipantError,
+)
+
+__all__ = [
+ # Functions
+ "hostpubkey_gen",
+ "params_id",
+ "participant_step1",
+ "participant_step2",
+ "participant_finalize",
+ "participant_investigate",
+ "coordinator_step1",
+ "coordinator_finalize",
+ "coordinator_investigate",
+ "recover",
+ # Exceptions
+ "HostSeckeyError",
+ "SessionParamsError",
+ "InvalidHostPubkeyError",
+ "DuplicateHostPubkeyError",
+ "ThresholdOrCountError",
+ "ProtocolError",
+ "FaultyParticipantOrCoordinatorError",
+ "FaultyCoordinatorError",
+ "UnknownFaultyParticipantOrCoordinatorError",
+ "RecoveryDataError",
+ # Types
+ "SessionParams",
+ "DKGOutput",
+ "ParticipantMsg1",
+ "ParticipantMsg2",
+ "CoordinatorInvestigationMsg",
+ "ParticipantState1",
+ "ParticipantState2",
+ "CoordinatorMsg1",
+ "CoordinatorMsg2",
+ "CoordinatorState",
+ "RecoveryData",
+]
+
+
+###
+### Equality check protocol CertEq
+###
+
+
+def certeq_message(x: bytes, idx: int) -> bytes:
+ # Domain separation as described in BIP 340
+ prefix = (BIP_TAG + "certeq message").encode()
+ prefix = prefix + b"\x00" * (33 - len(prefix))
+ return prefix + idx.to_bytes(4, "big") + x
+
+
+def certeq_participant_step(hostseckey: bytes, idx: int, x: bytes) -> bytes:
+ msg = certeq_message(x, idx)
+ return schnorr_sign(msg, hostseckey, aux_rand=random_bytes(32))
+
+
+def certeq_cert_len(n: int) -> int:
+ return 64 * n
+
+
+def certeq_verify(hostpubkeys: List[bytes], x: bytes, cert: bytes) -> None:
+ n = len(hostpubkeys)
+ if len(cert) != certeq_cert_len(n):
+ raise ValueError
+ for i in range(n):
+ msg = certeq_message(x, i)
+ valid = schnorr_verify(
+ msg,
+ hostpubkeys[i][1:33],
+ cert[i * 64 : (i + 1) * 64],
+ )
+ if not valid:
+ raise InvalidSignatureInCertificateError(i)
+
+
+def certeq_coordinator_step(sigs: List[bytes]) -> bytes:
+ cert = b"".join(sigs)
+ return cert
+
+
+class InvalidSignatureInCertificateError(ValueError):
+ def __init__(self, participant: int, *args: Any):
+ self.participant = participant
+ super().__init__(participant, *args)
+
+
+###
+### Host keys
+###
+
+
+def hostpubkey_gen(hostseckey: bytes) -> bytes:
+ """Compute the participant's host public key from the host secret key.
+
+ The host public key is the long-term cryptographic identity of the
+ participant.
+
+ This function interprets `hostseckey` as big-endian integer, and computes
+ the corresponding "plain" public key in compressed serialization (33 bytes,
+ starting with 0x02 or 0x03). This is the key generation procedure
+ traditionally used in Bitcoin, e.g., for ECDSA. In other words, this
+ function is equivalent to `IndividualPubkey` as defined in
+ [[BIP 327](https://github.com/bitcoin/bips/blob/master/bip-0327.mediawiki#key-generation-of-an-individual-signer)].
+ TODO Refer to the FROST signing BIP instead, once that one has a number.
+
+ Arguments:
+ hostseckey: This participant's long-term secret key (32 bytes).
+ The key **must** be 32 bytes of cryptographically secure randomness
+ with sufficient entropy to be unpredictable. All outputs of a
+ successful participant in a session can be recovered from (a backup
+ of) the key and per-session recovery data.
+
+ The same host secret key (and thus the same host public key) can be
+ used in multiple DKG sessions. A host public key can be correlated
+ to the threshold public key resulting from a DKG session only by
+ parties who observed the session, namely the participants, the
+ coordinator (and any eavesdropper).
+
+ Returns:
+ The host public key (33 bytes).
+
+ Raises:
+ HostSeckeyError: If the length of `hostseckey` is not 32 bytes.
+ """
+ if len(hostseckey) != 32:
+ raise HostSeckeyError
+
+ return pubkey_gen_plain(hostseckey)
+
+
+class HostSeckeyError(ValueError):
+ """Raised if the length of a host secret key is not 32 bytes."""
+
+
+###
+### Session input and outputs
+###
+
+
+# It would be more idiomatic Python to make this a real (data)class, perform
+# data validation in the constructor, and add methods to it, but let's stick to
+# simple tuples in the public API in order to keep it approachable to readers
+# who are not too familiar with Python.
+class SessionParams(NamedTuple):
+ """A `SessionParams` tuple holds the common parameters of a DKG session.
+
+ Attributes:
+ hostpubkeys: Ordered list of the host public keys of all participants.
+ t: The participation threshold `t`.
+ This is the number of participants that will be required to sign.
+ It must hold that `1 <= t <= len(hostpubkeys) <= 2**32 - 1`.
+
+ Participants **must** ensure that they have obtained authentic host
+ public keys of all the other participants in the session to make
+ sure that they run the DKG and generate a threshold public key with
+ the intended set of participants. This is analogous to traditional
+ threshold signatures (known as "multisig" in the Bitcoin community),
+ [[BIP 383](https://github.com/bitcoin/bips/blob/master/bip-0383.mediawiki)],
+ where the participants need to obtain authentic extended public keys
+ ("xpubs") from the other participants to generate multisig
+ addresses, or MuSig2
+ [[BIP 327](https://github.com/bitcoin/bips/blob/master/bip-0327.mediawiki)],
+ where the participants need to obtain authentic individual public
+ keys of the other participants to generate an aggregated public key.
+
+ A DKG session will fail if the participants and the coordinator in a session
+ don't have the `hostpubkeys` in the same order. This will make sure that
+ honest participants agree on the order as part of the session, which is
+ useful if the order carries an implicit meaning in the application (e.g., if
+ the first `t` participants are the primary participants for signing and the
+ others are fallback participants). If there is no canonical order of the
+ participants in the application, the caller can sort the list of host public
+ keys with the [KeySort algorithm specified in
+ BIP 327](https://github.com/bitcoin/bips/blob/master/bip-0327.mediawiki#key-sorting)
+ to abstract away from the order.
+ """
+
+ hostpubkeys: List[bytes]
+ t: int
+
+
+def params_validate(params: SessionParams) -> None:
+ (hostpubkeys, t) = params
+
+ if not (1 <= t <= len(hostpubkeys) <= 2**32 - 1):
+ raise ThresholdOrCountError
+
+ # Check that all hostpubkeys are valid
+ for i, hostpubkey in enumerate(hostpubkeys):
+ try:
+ _ = GE.from_bytes_compressed(hostpubkey)
+ except ValueError as e:
+ raise InvalidHostPubkeyError(i) from e
+
+ # Check for duplicate hostpubkeys and find the corresponding indices
+ hostpubkey_to_idx: Dict[bytes, int] = dict()
+ for i, hostpubkey in enumerate(hostpubkeys):
+ if hostpubkey in hostpubkey_to_idx:
+ raise DuplicateHostPubkeyError(hostpubkey_to_idx[hostpubkey], i)
+ hostpubkey_to_idx[hostpubkey] = i
+
+
+def params_id(params: SessionParams) -> bytes:
+ """Return the parameters ID, a unique representation of the `SessionParams`.
+
+ In the common scenario that the participants obtain host public keys from
+ the other participants over channels that do not provide end-to-end
+ authentication of the sending participant (e.g., if the participants simply
+ send their unauthenticated host public keys to the coordinator, who is
+ supposed to relay them to all participants), the parameters ID serves as a
+ convenient way to perform an out-of-band comparison of all host public keys.
+ It is a collision-resistant cryptographic hash of the `SessionParams`
+ tuple. As a result, if all participants have obtained an identical
+ parameters ID (as can be verified out-of-band), then they all agree on all
+ host public keys and the threshold `t`, and in particular, all participants
+have obtained authentic host public keys.
+
+ Returns:
+ bytes: The parameters ID, a 32-byte string.
+
+ Raises:
+ InvalidHostPubkeyError: If `hostpubkeys` contains an invalid public key.
+ DuplicateHostPubkeyError: If `hostpubkeys` contains duplicates.
+ ThresholdOrCountError: If `1 <= t <= len(hostpubkeys) <= 2**32 - 1` does
+ not hold.
+ """
+ params_validate(params)
+ hostpubkeys, t = params
+
+ t_bytes = t.to_bytes(4, byteorder="big")
+ params_id = tagged_hash_bip_dkg(
+ "params_id",
+ t_bytes + b"".join(hostpubkeys),
+ )
+ assert len(params_id) == 32
+ return params_id
+
+
+class SessionParamsError(ValueError):
+ """Base exception for invalid `SessionParams` tuples."""
+
+
+class DuplicateHostPubkeyError(SessionParamsError):
+ """Raised if two participants have identical host public keys.
+
+ This exception is raised when two participants have an identical host public
+ key in the `SessionParams` tuple. Assuming the host public keys in question
+ have been transmitted correctly, this exception implies that at least one of
+ the two participants is faulty (because duplicates occur only with
+ negligible probability if keys are generated honestly).
+
+ Attributes:
+ participant1 (int): Index of the first participant.
+ participant2 (int): Index of the second participant.
+ """
+
+ def __init__(self, participant1: int, participant2: int, *args: Any):
+ self.participant1 = participant1
+ self.participant2 = participant2
+ super().__init__(participant1, participant2, *args)
+
+
+class InvalidHostPubkeyError(SessionParamsError):
+ """Raised if a host public key is invalid.
+
+ This exception is raised when a host public key in the `SessionParams` tuple
+ is not a valid public key in compressed serialization. Assuming the host
+public key in question has been transmitted correctly, this exception
+ implies that the corresponding participant is faulty.
+
+ Attributes:
+ participant (int): Index of the participant.
+ """
+
+ def __init__(self, participant: int, *args: Any):
+ self.participant = participant
+ super().__init__(participant, *args)
+
+
+class ThresholdOrCountError(SessionParamsError):
+ """Raised if `1 <= t <= len(hostpubkeys) <= 2**32 - 1` does not hold."""
+
+
+# This is really the same definition as in simplpedpop and encpedpop. We repeat
+# it here only to have its docstring in this module.
+class DKGOutput(NamedTuple):
+ """Holds the outputs of a DKG session.
+
+ Attributes:
+ secshare: Secret share of the participant (or `None` for coordinator)
+ threshold_pubkey: Generated threshold public key representing the group
+ pubshares: Public shares of the participants
+ """
+
+ secshare: Optional[bytes]
+ threshold_pubkey: bytes
+ pubshares: List[bytes]
+
+
+RecoveryData = NewType("RecoveryData", bytes)
+
+
+###
+### Messages
+###
+
+
+class ParticipantMsg1(NamedTuple):
+ enc_pmsg: encpedpop.ParticipantMsg
+
+
+class ParticipantMsg2(NamedTuple):
+ sig: bytes
+
+
+class CoordinatorMsg1(NamedTuple):
+ enc_cmsg: encpedpop.CoordinatorMsg
+ enc_secshares: List[Scalar]
+
+
+class CoordinatorMsg2(NamedTuple):
+ cert: bytes
+
+
+class CoordinatorInvestigationMsg(NamedTuple):
+ enc_cinv: encpedpop.CoordinatorInvestigationMsg
+
+
+def deserialize_recovery_data(
+ b: bytes,
+) -> Tuple[int, VSSCommitment, List[bytes], List[bytes], List[Scalar], bytes]:
+ rest = b
+
+ # Read t (4 bytes)
+ if len(rest) < 4:
+ raise ValueError
+ t, rest = int.from_bytes(rest[:4], byteorder="big"), rest[4:]
+
+ # Read sum_coms (33*t bytes)
+ if len(rest) < 33 * t:
+ raise ValueError
+ sum_coms, rest = (
+ VSSCommitment.from_bytes_and_t(rest[: 33 * t], t),
+ rest[33 * t :],
+ )
+
+ # Compute n
+ n, remainder = divmod(len(rest), (33 + 33 + 32 + 64))
+ if remainder != 0:
+ raise ValueError
+
+ # Read hostpubkeys (33*n bytes)
+ if len(rest) < 33 * n:
+ raise ValueError
+ hostpubkeys, rest = [rest[i : i + 33] for i in range(0, 33 * n, 33)], rest[33 * n :]
+
+ # Read pubnonces (33*n bytes)
+ if len(rest) < 33 * n:
+ raise ValueError
+ pubnonces, rest = [rest[i : i + 33] for i in range(0, 33 * n, 33)], rest[33 * n :]
+
+ # Read enc_secshares (32*n bytes)
+ if len(rest) < 32 * n:
+ raise ValueError
+ enc_secshares, rest = (
+ [Scalar(int_from_bytes(rest[i : i + 32])) for i in range(0, 32 * n, 32)],
+ rest[32 * n :],
+ )
+
+ # Read cert
+ cert_len = certeq_cert_len(n)
+ if len(rest) < cert_len:
+ raise ValueError
+ cert, rest = rest[:cert_len], rest[cert_len:]
+
+ if len(rest) != 0:
+ raise ValueError
+ return (t, sum_coms, hostpubkeys, pubnonces, enc_secshares, cert)
+
+
+###
+### Participant
+###
+
+
+class ParticipantState1(NamedTuple):
+ params: SessionParams
+ idx: int
+ enc_state: encpedpop.ParticipantState
+
+
+class ParticipantState2(NamedTuple):
+ params: SessionParams
+ eq_input: bytes
+ dkg_output: DKGOutput
+
+
+def participant_step1(
+ hostseckey: bytes, params: SessionParams, random: bytes
+) -> Tuple[ParticipantState1, ParticipantMsg1]:
+ """Perform a participant's first step of a ChillDKG session.
+
+ Arguments:
+ hostseckey: Participant's long-term host secret key (32 bytes).
+ params: Common session parameters.
+ random: FRESH random byte string (32 bytes).
+
+ Returns:
+ ParticipantState1: The participant's session state after this step, to
+ be passed as an argument to `participant_step2`. The state **must
+ not** be reused (i.e., it must be passed only to one
+ `participant_step2` call).
+ ParticipantMsg1: The first message to be sent to the coordinator.
+
+ Raises:
+ HostSeckeyError: If the length of `hostseckey` is not 32 bytes or if
+ `hostseckey` does not match any entry of `hostpubkeys`.
+ InvalidHostPubkeyError: If `hostpubkeys` contains an invalid public key.
+ DuplicateHostPubkeyError: If `hostpubkeys` contains duplicates.
+ ThresholdOrCountError: If `1 <= t <= len(hostpubkeys) <= 2**32 - 1` does
+ not hold.
+ """
+ hostpubkey = hostpubkey_gen(hostseckey) # HostSeckeyError if len(hostseckey) != 32
+
+ params_validate(params)
+ (hostpubkeys, t) = params
+
+ try:
+ idx = hostpubkeys.index(hostpubkey)
+ except ValueError as e:
+ raise HostSeckeyError(
+ "Host secret key does not match any host public key"
+ ) from e
+ enc_state, enc_pmsg = encpedpop.participant_step1(
+ # We know that EncPedPop uses its seed only by feeding it to a hash
+ # function. Thus, it is sufficient that the seed has a high entropy,
+ # and so we can simply pass the hostseckey as seed.
+ seed=hostseckey,
+ deckey=hostseckey,
+ t=t,
+ # This requires the joint security of Schnorr signatures and ECDH.
+ enckeys=hostpubkeys,
+ idx=idx,
+ random=random,
+ ) # HostSeckeyError if len(hostseckey) != 32
+ state1 = ParticipantState1(params, idx, enc_state)
+ return state1, ParticipantMsg1(enc_pmsg)
+
+
+def participant_step2(
+ hostseckey: bytes,
+ state1: ParticipantState1,
+ cmsg1: CoordinatorMsg1,
+) -> Tuple[ParticipantState2, ParticipantMsg2]:
+ """Perform a participant's second step of a ChillDKG session.
+
+ **Warning:**
+ After sending the returned message to the coordinator, this participant
+ **must not** erase the hostseckey, even if this participant does not receive
+ the coordinator reply needed for the `participant_finalize` call. The
+ underlying reason is that some other participant may receive the coordinator
+ reply, deem the DKG session successful and use the resulting threshold
+ public key (e.g., by sending funds to it). If the coordinator reply remains
+ missing, that other participant can, at any point in the future, convince
+ this participant of the success of the DKG session by presenting recovery
+ data, from which this participant can recover the DKG output using the
+ `recover` function.
+
+ Arguments:
+ hostseckey: Participant's long-term host secret key (32 bytes).
+ state1: The participant's session state as output by
+ `participant_step1`.
+ cmsg1: The first message received from the coordinator.
+
+ Returns:
+ ParticipantState2: The participant's session state after this step, to
+ be passed as an argument to `participant_finalize`. The state **must
+ not** be reused (i.e., it must be passed only to one
+ `participant_finalize` call).
+ ParticipantMsg2: The second message to be sent to the coordinator.
+
+ Raises:
+ HostSeckeyError: If the length of `hostseckey` is not 32 bytes.
+ FaultyParticipantOrCoordinatorError: If another known participant or the
+ coordinator is faulty. See the documentation of the exception for
+ further details.
+ UnknownFaultyParticipantOrCoordinatorError: If another unknown
+ participant or the coordinator is faulty, but running the optional
+ investigation procedure of the protocol is necessary to determine a
+ suspected participant. See the documentation of the exception for
+ further details.
+ """
+ params, idx, enc_state = state1
+ enc_cmsg, enc_secshares = cmsg1
+
+ enc_dkg_output, eq_input = encpedpop.participant_step2(
+ state=enc_state,
+ deckey=hostseckey,
+ cmsg=enc_cmsg,
+ enc_secshare=enc_secshares[idx],
+ )
+
+ # Include the enc_shares in eq_input to ensure that participants agree on
+ # all shares, which in turn ensures that they have the right recovery data.
+ eq_input += b"".join([bytes_from_int(int(share)) for share in enc_secshares])
+ dkg_output = DKGOutput._make(enc_dkg_output)
+ state2 = ParticipantState2(params, eq_input, dkg_output)
+ sig = certeq_participant_step(hostseckey, idx, eq_input)
+ pmsg2 = ParticipantMsg2(sig)
+ return state2, pmsg2
+
+
+def participant_finalize(
+ state2: ParticipantState2, cmsg2: CoordinatorMsg2
+) -> Tuple[DKGOutput, RecoveryData]:
+ """Perform a participant's final step of a ChillDKG session.
+
+ If this function returns properly (without an exception), then this
+ participant deems the DKG session successful. It is, however, possible that
+ other participants have received a `cmsg2` from the coordinator that made
+ them raise an exception instead, or that they have not received a `cmsg2`
+ from the coordinator at all. These participants can, at any point in time in
+ the future (e.g., when initiating a signing session), be convinced to deem
+ the session successful by presenting the recovery data to them, from which
+ they can recover the DKG outputs using the `recover` function.
+
+ **Warning:**
+ Changing perspectives, this implies that, even when obtaining an exception,
+ this participant **must not** conclude that the DKG session has failed, and
+as a consequence, this participant **must not** erase the hostseckey. The
+ underlying reason is that some other participant may deem the DKG session
+ successful and use the resulting threshold public key (e.g., by sending
+ funds to it). That other participant can, at any point in the future,
+ convince this participant of the success of the DKG session by presenting
+ recovery data to this participant.
+
+ Arguments:
+ state2: The participant's state as output by `participant_step2`.
+
+ Returns:
+ DKGOutput: The DKG output.
+ bytes: The serialized recovery data.
+
+ Raises:
+ FaultyParticipantOrCoordinatorError: If another known participant or the
+ coordinator is faulty. Make sure to read the above warning, and see
+ the documentation of the exception for further details.
+ FaultyCoordinatorError: If the coordinator is faulty. Make sure to read
+ the above warning, and see the documentation of the exception for
+ further details.
+ """
+ params, eq_input, dkg_output = state2
+ try:
+ certeq_verify(params.hostpubkeys, eq_input, cmsg2.cert)
+ except InvalidSignatureInCertificateError as e:
+ raise FaultyParticipantOrCoordinatorError(
+ e.participant,
+ "Participant has provided an invalid signature for the certificate",
+ ) from e
+ return dkg_output, RecoveryData(eq_input + cmsg2.cert)
+
+
+def participant_investigate(
+ error: UnknownFaultyParticipantOrCoordinatorError,
+ cinv: CoordinatorInvestigationMsg,
+) -> NoReturn:
+ """Investigate who is to blame for a failed ChillDKG session.
+
+ This function can optionally be called when `participant_step2` raises
+ `UnknownFaultyParticipantOrCoordinatorError`. It narrows down the suspected
+ faulty parties by analyzing the investigation message provided by the coordinator.
+
+ This function does not return normally. Instead, it raises one of two
+ exceptions.
+
+ Arguments:
+ error: `UnknownFaultyParticipantOrCoordinatorError` raised by
+ `participant_step2`.
+ cinv: Coordinator investigation message for this participant as output
+ by `coordinator_investigate`.
+
+ Raises:
+ FaultyParticipantOrCoordinatorError: If another known participant or the
+ coordinator is faulty. See the documentation of the exception for
+ further details.
+ FaultyCoordinatorError: If the coordinator is faulty. See the
+ documentation of the exception for further details.
+ """
+ assert isinstance(error.inv_data, encpedpop.ParticipantInvestigationData)
+ encpedpop.participant_investigate(
+ error=error,
+ cinv=cinv.enc_cinv,
+ )
+
+
+###
+### Coordinator
+###
+
+
+class CoordinatorState(NamedTuple):
+ params: SessionParams
+ eq_input: bytes
+ dkg_output: DKGOutput
+
+
+def coordinator_step1(
+ pmsgs1: List[ParticipantMsg1], params: SessionParams
+) -> Tuple[CoordinatorState, CoordinatorMsg1]:
+ """Perform the coordinator's first step of a ChillDKG session.
+
+ Arguments:
+ pmsgs1: List of first messages received from the participants.
+ params: Common session parameters.
+
+ Returns:
+ CoordinatorState: The coordinator's session state after this step, to be
+ passed as an argument to `coordinator_finalize`. The state is not
+ supposed to be reused (i.e., it should be passed only to one
+ `coordinator_finalize` call).
+ CoordinatorMsg1: The first message to be sent to all participants.
+
+ Raises:
+ InvalidHostPubkeyError: If `hostpubkeys` contains an invalid public key.
+ DuplicateHostPubkeyError: If `hostpubkeys` contains duplicates.
+ ThresholdOrCountError: If `1 <= t <= len(hostpubkeys) <= 2**32 - 1` does
+ not hold.
+ """
+ params_validate(params)
+ hostpubkeys, t = params
+
+ enc_cmsg, enc_dkg_output, eq_input, enc_secshares = encpedpop.coordinator_step(
+ pmsgs=[pmsg1.enc_pmsg for pmsg1 in pmsgs1],
+ t=t,
+ enckeys=hostpubkeys,
+ )
+ eq_input += b"".join([bytes_from_int(int(share)) for share in enc_secshares])
+ dkg_output = DKGOutput._make(enc_dkg_output) # Convert to chilldkg.DKGOutput type
+ state = CoordinatorState(params, eq_input, dkg_output)
+ cmsg1 = CoordinatorMsg1(enc_cmsg, enc_secshares)
+ return state, cmsg1
+
+
+def coordinator_finalize(
+ state: CoordinatorState, pmsgs2: List[ParticipantMsg2]
+) -> Tuple[CoordinatorMsg2, DKGOutput, RecoveryData]:
+ """Perform the coordinator's final step of a ChillDKG session.
+
+ If this function returns properly (without an exception), then the
+ coordinator deems the DKG session successful. The returned `CoordinatorMsg2`
+ is supposed to be sent to all participants, who are supposed to pass it as
+ input to the `participant_finalize` function. It is, however, possible that
+ some participants pass a wrong and invalid message to `participant_finalize`
+ (e.g., because the message is transmitted incorrectly). These participants
+ can, at any point in time in the future (e.g., when initiating a signing
+ session), be convinced to deem the session successful by presenting the
+ recovery data to them, from which they can recover the DKG outputs using the
+ `recover` function.
+
+ If this function raises an exception, then the DKG session was not
+ successful from the perspective of the coordinator. In this case, it is, in
+ principle, possible to recover the DKG outputs of the coordinator using the
+ recovery data from a successful participant, should one exist. Any such
+ successful participant is either faulty, or has received messages from
+ other participants via a communication channel beside the coordinator.
+
+ Arguments:
+ state: The coordinator's session state as output by `coordinator_step1`.
+ pmsgs2: List of second messages received from the participants.
+
+ Returns:
+ CoordinatorMsg2: The second message to be sent to all participants.
+ DKGOutput: The DKG output. Since the coordinator does not have a secret
+ share, the DKG output will have the `secshare` field set to `None`.
+ bytes: The serialized recovery data.
+
+ Raises:
+        FaultyParticipantError: If a participant is faulty. See the
+            documentation of the exception for further details.
+ """
+ params, eq_input, dkg_output = state
+ cert = certeq_coordinator_step([pmsg2.sig for pmsg2 in pmsgs2])
+ try:
+ certeq_verify(params.hostpubkeys, eq_input, cert)
+ except InvalidSignatureInCertificateError as e:
+ raise FaultyParticipantError(
+ e.participant,
+ "Participant has provided an invalid signature for the certificate",
+ ) from e
+ return CoordinatorMsg2(cert), dkg_output, RecoveryData(eq_input + cert)
+
+
+def coordinator_investigate(
+ pmsgs: List[ParticipantMsg1],
+) -> List[CoordinatorInvestigationMsg]:
+ """Generate investigation messages for a ChillDKG session.
+
+ The investigation messages will allow the participants to investigate who is
+ to blame for a failed ChillDKG session (see `participant_investigate`).
+
+ Each message is intended for a single participant but can be safely
+ broadcast to all participants because the messages contain no confidential
+ information.
+
+ Arguments:
+ pmsgs: List of first messages received from the participants.
+
+ Returns:
+ List[CoordinatorInvestigationMsg]: A list of investigation messages, each
+ intended for a single participant.
+ """
+ enc_cinvs = encpedpop.coordinator_investigate([pmsg.enc_pmsg for pmsg in pmsgs])
+ return [CoordinatorInvestigationMsg(enc_cinv) for enc_cinv in enc_cinvs]
+
+
+###
+### Recovery
+###
+
+
+def recover(
+ hostseckey: Optional[bytes], recovery_data: RecoveryData
+) -> Tuple[DKGOutput, SessionParams]:
+ """Recover the DKG output of a ChillDKG session.
+
+ This function serves two different purposes:
+ 1. To recover from an exception in `participant_finalize` or
+ `coordinator_finalize`, after obtaining the recovery data from another
+ participant or the coordinator. See `participant_finalize` and
+ `coordinator_finalize` for background.
+ 2. To reproduce the DKG outputs on a new device, e.g., to recover from a
+ backup after data loss.
+
+ Arguments:
+ hostseckey: This participant's long-term host secret key (32 bytes) or
+ `None` if recovering the coordinator.
+ recovery_data: Recovery data from a successful session.
+
+ Returns:
+ DKGOutput: The recovered DKG output.
+ SessionParams: The common parameters of the recovered session.
+
+ Raises:
+ HostSeckeyError: If the length of `hostseckey` is not 32 bytes or if
+ `hostseckey` does not match the recovery data. (This can also
+ occur if the recovery data is invalid.)
+ RecoveryDataError: If recovery failed due to invalid recovery data.
+ """
+ try:
+ (t, sum_coms, hostpubkeys, pubnonces, enc_secshares, cert) = (
+ deserialize_recovery_data(recovery_data)
+ )
+ except Exception as e:
+ raise RecoveryDataError("Failed to deserialize recovery data") from e
+
+ n = len(hostpubkeys)
+ params = SessionParams(hostpubkeys, t)
+ try:
+ params_validate(params)
+ except SessionParamsError as e:
+ raise RecoveryDataError("Invalid session parameters in recovery data") from e
+
+ # Verify cert
+ eq_input = recovery_data[: -len(cert)]
+ try:
+ certeq_verify(hostpubkeys, eq_input, cert)
+ except InvalidSignatureInCertificateError as e:
+ raise RecoveryDataError("Invalid certificate in recovery data") from e
+
+ # Compute threshold pubkey and individual pubshares
+ sum_coms, tweak, _ = sum_coms.invalid_taproot_commit()
+ threshold_pubkey = sum_coms.commitment_to_secret()
+ pubshares = [sum_coms.pubshare(i) for i in range(n)]
+
+ if hostseckey:
+ hostpubkey = hostpubkey_gen(hostseckey) # HostSeckeyError
+ try:
+ idx = hostpubkeys.index(hostpubkey)
+ except ValueError as e:
+ raise HostSeckeyError(
+ "Host secret key does not match any host public key in the recovery data"
+ ) from e
+
+ # Decrypt share
+ enc_context = encpedpop.serialize_enc_context(t, hostpubkeys)
+ secshare = encpedpop.decrypt_sum(
+ hostseckey,
+ hostpubkeys[idx],
+ pubnonces,
+ enc_context,
+ idx,
+ enc_secshares[idx],
+ )
+ secshare_tweaked = secshare + tweak
+
+ # This is just a sanity check. Our signature is valid, so we have done
+ # an equivalent check already during the actual session.
+ assert VSSCommitment.verify_secshare(secshare_tweaked, pubshares[idx])
+ else:
+ secshare_tweaked = None
+
+ dkg_output = DKGOutput(
+ None if secshare_tweaked is None else secshare_tweaked.to_bytes(),
+ threshold_pubkey.to_bytes_compressed(),
+ [pubshare.to_bytes_compressed() for pubshare in pubshares],
+ )
+ return dkg_output, params
+
+
+class RecoveryDataError(ValueError):
+ """Raised if the recovery data is invalid."""
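The fixed-width layout parsed by `deserialize_recovery_data` makes the expected size of recovery data straightforward to compute: 4 bytes for `t`, `33*t` bytes of VSS commitment, then, per participant, a 33-byte host public key, a 33-byte pubnonce, a 32-byte encrypted secret share, and a 64-byte certificate signature. A small standalone sketch of that arithmetic (the helper name is illustrative, not part of the module API):

```python
def recovery_data_len(t: int, n: int) -> int:
    """Expected byte length of ChillDKG recovery data for threshold t and
    n participants, following the fixed-width layout parsed by
    deserialize_recovery_data:
    t (4) | sum_coms (33*t) | hostpubkeys (33*n) | pubnonces (33*n)
    | enc_secshares (32*n) | cert (64*n)."""
    return 4 + 33 * t + n * (33 + 33 + 32 + 64)

# Example: a 2-of-3 session.
assert recovery_data_len(2, 3) == 4 + 66 + 3 * 162  # = 556 bytes
```

This also explains why the parser can infer `n` from the remaining length: after stripping `t` and the commitment, each participant contributes exactly 162 bytes, so `divmod(len(rest), 162)` must leave no remainder.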
diff --git a/src/jmfrost/chilldkg_ref/encpedpop.py b/src/jmfrost/chilldkg_ref/encpedpop.py
new file mode 100644
index 0000000..c97a90b
--- /dev/null
+++ b/src/jmfrost/chilldkg_ref/encpedpop.py
@@ -0,0 +1,336 @@
+from typing import Tuple, List, NamedTuple, NoReturn
+
+from ..secp256k1proto.secp256k1 import Scalar, GE
+from ..secp256k1proto.ecdh import ecdh_libsecp256k1
+from ..secp256k1proto.keys import pubkey_gen_plain
+from ..secp256k1proto.util import int_from_bytes
+
+from . import simplpedpop
+from .util import (
+ UnknownFaultyParticipantOrCoordinatorError,
+ tagged_hash_bip_dkg,
+ FaultyParticipantOrCoordinatorError,
+ FaultyCoordinatorError,
+)
+
+
+###
+### Encryption
+###
+
+
+def ecdh(
+ seckey: bytes, my_pubkey: bytes, their_pubkey: bytes, context: bytes, sending: bool
+) -> Scalar:
+ data = ecdh_libsecp256k1(seckey, their_pubkey)
+ if sending:
+ data += my_pubkey + their_pubkey
+ else:
+ data += their_pubkey + my_pubkey
+ assert len(data) == 32 + 2 * 33
+ data += context
+ return Scalar(int_from_bytes(tagged_hash_bip_dkg("encpedpop ecdh", data)))
+
+
+def self_pad(symkey: bytes, nonce: bytes, context: bytes) -> Scalar:
+ # Pad for symmetric encryption to ourselves
+ return Scalar(
+ int_from_bytes(
+ tagged_hash_bip_dkg("encaps_multi self_pad", symkey + nonce + context)
+ )
+ )
+
+
+def encaps_multi(
+ secnonce: bytes,
+ pubnonce: bytes,
+ deckey: bytes,
+ enckeys: List[bytes],
+ context: bytes,
+ idx: int,
+) -> List[Scalar]:
+ # This is effectively the "Hashed ElGamal" multi-recipient KEM described in
+ # Section 5 of "Multi-recipient encryption, revisited" by Alexandre Pinto,
+ # Bertram Poettering, Jacob C. N. Schuldt (AsiaCCS 2014). Its crucial
+ # feature is to feed the index of the enckey to the hash function. The only
+ # difference is that we feed also the pubnonce and context data into the
+ # hash function.
+ pads = []
+ for i, enckey in enumerate(enckeys):
+ context_ = i.to_bytes(4, byteorder="big") + context
+ if i == idx:
+ # We're encrypting to ourselves, so we use a symmetrically derived
+ # pad to save the ECDH computation.
+ pad = self_pad(symkey=deckey, nonce=pubnonce, context=context_)
+ else:
+ pad = ecdh(
+ seckey=secnonce,
+ my_pubkey=pubnonce,
+ their_pubkey=enckey,
+ context=context_,
+ sending=True,
+ )
+ pads.append(pad)
+ return pads
+
+
+def encrypt_multi(
+ secnonce: bytes,
+ pubnonce: bytes,
+ deckey: bytes,
+ enckeys: List[bytes],
+ context: bytes,
+ idx: int,
+ plaintexts: List[Scalar],
+) -> List[Scalar]:
+ pads = encaps_multi(secnonce, pubnonce, deckey, enckeys, context, idx)
+ assert len(plaintexts) == len(pads)
+ ciphertexts = [plaintext + pad for plaintext, pad in zip(plaintexts, pads)]
+ return ciphertexts
+
+
+def decaps_multi(
+ deckey: bytes,
+ enckey: bytes,
+ pubnonces: List[bytes],
+ context: bytes,
+ idx: int,
+) -> List[Scalar]:
+ context_ = idx.to_bytes(4, byteorder="big") + context
+ pads = []
+ for sender_idx, pubnonce in enumerate(pubnonces):
+ if sender_idx == idx:
+ pad = self_pad(symkey=deckey, nonce=pubnonce, context=context_)
+ else:
+ pad = ecdh(
+ seckey=deckey,
+ my_pubkey=enckey,
+ their_pubkey=pubnonce,
+ context=context_,
+ sending=False,
+ )
+ pads.append(pad)
+ return pads
+
+
+def decrypt_sum(
+ deckey: bytes,
+ enckey: bytes,
+ pubnonces: List[bytes],
+ context: bytes,
+ idx: int,
+ sum_ciphertexts: Scalar,
+) -> Scalar:
+ if idx >= len(pubnonces):
+ raise IndexError
+ pads = decaps_multi(deckey, enckey, pubnonces, context, idx)
+ sum_plaintexts: Scalar = sum_ciphertexts - Scalar.sum(*pads)
+ return sum_plaintexts
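Review note: the additive homomorphism that `decrypt_sum()` relies on can be sketched with plain modular arithmetic (toy modulus; the real code uses secp256k1 `Scalar`s modulo the group order):

```python
# Toy field arithmetic standing in for secp256k1 Scalars (hypothetical
# modulus, for illustration only).
q = 2**61 - 1

pads = [11, 22, 33]            # pads the receiver can re-derive
plaintexts = [5, 7, 9]         # partial secshares addressed to us
ciphertexts = [(m + pad) % q for m, pad in zip(plaintexts, pads)]

# The coordinator sums ciphertexts without learning the plaintexts ...
sum_ct = sum(ciphertexts) % q
# ... and the receiver strips all pads in one subtraction, exactly as
# decrypt_sum() computes sum_ciphertexts - Scalar.sum(*pads).
recovered = (sum_ct - sum(pads)) % q
assert recovered == sum(plaintexts) % q
```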
+
+
+###
+### Messages
+###
+
+
+class ParticipantMsg(NamedTuple):
+ simpl_pmsg: simplpedpop.ParticipantMsg
+ pubnonce: bytes
+ enc_shares: List[Scalar]
+
+
+class CoordinatorMsg(NamedTuple):
+ simpl_cmsg: simplpedpop.CoordinatorMsg
+ pubnonces: List[bytes]
+
+
+class CoordinatorInvestigationMsg(NamedTuple):
+ enc_partial_secshares: List[Scalar]
+ partial_pubshares: List[GE]
+
+
+###
+### Participant
+###
+
+
+class ParticipantState(NamedTuple):
+ simpl_state: simplpedpop.ParticipantState
+ pubnonce: bytes
+ enckeys: List[bytes]
+ idx: int
+
+
+class ParticipantInvestigationData(NamedTuple):
+ simpl_bstate: simplpedpop.ParticipantInvestigationData
+ enc_secshare: Scalar
+ pads: List[Scalar]
+
+
+def serialize_enc_context(t: int, enckeys: List[bytes]) -> bytes:
+ return t.to_bytes(4, byteorder="big") + b"".join(enckeys)
+
+
+def participant_step1(
+ seed: bytes,
+ deckey: bytes,
+ enckeys: List[bytes],
+ t: int,
+ idx: int,
+ random: bytes,
+) -> Tuple[ParticipantState, ParticipantMsg]:
+ assert t < 2 ** (4 * 8)
+ assert len(random) == 32
+ n = len(enckeys)
+
+ # Derive an encryption nonce and a seed for SimplPedPop.
+ #
+ # SimplPedPop will use its seed to derive the secret shares, which we will
+ # encrypt using the encryption nonce. That means that all entropy used in
+ # the derivation of simpl_seed should also be in the derivation of the
+ # pubnonce, to ensure that we never encrypt different secret shares with the
+ # same encryption pads. The foolproof way to achieve this is to simply
+ # derive the nonce from simpl_seed.
+ enc_context = serialize_enc_context(t, enckeys)
+ simpl_seed = tagged_hash_bip_dkg("encpedpop seed", seed + random + enc_context)
+ secnonce = tagged_hash_bip_dkg("encpedpop secnonce", simpl_seed)
+ pubnonce = pubkey_gen_plain(secnonce)
+
+ simpl_state, simpl_pmsg, shares = simplpedpop.participant_step1(
+ simpl_seed, t, n, idx
+ )
+ assert len(shares) == n
+
+ enc_shares = encrypt_multi(
+ secnonce, pubnonce, deckey, enckeys, enc_context, idx, shares
+ )
+
+ pmsg = ParticipantMsg(simpl_pmsg, pubnonce, enc_shares)
+ state = ParticipantState(simpl_state, pubnonce, enckeys, idx)
+ return state, pmsg
+
+
+def participant_step2(
+ state: ParticipantState,
+ deckey: bytes,
+ cmsg: CoordinatorMsg,
+ enc_secshare: Scalar,
+) -> Tuple[simplpedpop.DKGOutput, bytes]:
+ simpl_state, pubnonce, enckeys, idx = state
+ simpl_cmsg, pubnonces = cmsg
+
+ reported_pubnonce = pubnonces[idx]
+ if reported_pubnonce != pubnonce:
+ raise FaultyCoordinatorError("Coordinator replied with wrong pubnonce")
+
+ enc_context = serialize_enc_context(simpl_state.t, enckeys)
+ pads = decaps_multi(deckey, enckeys[idx], pubnonces, enc_context, idx)
+ secshare = enc_secshare - Scalar.sum(*pads)
+
+ try:
+ dkg_output, eq_input = simplpedpop.participant_step2(
+ simpl_state, simpl_cmsg, secshare
+ )
+ except UnknownFaultyParticipantOrCoordinatorError as e:
+ assert isinstance(e.inv_data, simplpedpop.ParticipantInvestigationData)
+ # Translate simplpedpop.ParticipantInvestigationData into our own
+ # encpedpop.ParticipantInvestigationData.
+ inv_data = ParticipantInvestigationData(e.inv_data, enc_secshare, pads)
+ raise UnknownFaultyParticipantOrCoordinatorError(inv_data, e.args) from e
+
+ eq_input += b"".join(enckeys) + b"".join(pubnonces)
+ return dkg_output, eq_input
+
+
+def participant_investigate(
+ error: UnknownFaultyParticipantOrCoordinatorError,
+ cinv: CoordinatorInvestigationMsg,
+) -> NoReturn:
+ simpl_inv_data, enc_secshare, pads = error.inv_data
+ enc_partial_secshares, partial_pubshares = cinv
+ assert len(enc_partial_secshares) == len(pads)
+ partial_secshares = [
+ enc_partial_secshare - pad
+ for enc_partial_secshare, pad in zip(enc_partial_secshares, pads)
+ ]
+
+ simpl_cinv = simplpedpop.CoordinatorInvestigationMsg(partial_pubshares)
+ try:
+ simplpedpop.participant_investigate(
+ UnknownFaultyParticipantOrCoordinatorError(simpl_inv_data),
+ simpl_cinv,
+ partial_secshares,
+ )
+ except simplpedpop.SecshareSumError as e:
+ # The secshare is not equal to the sum of the partial secshares in the
+ # investigation message. Since the encryption is additively homomorphic,
+        # this can only happen if the *encrypted* secshare is not equal to
+        # the sum of the encrypted partial secshares, which is the
+ # coordinator's fault.
+ assert Scalar.sum(*enc_partial_secshares) != enc_secshare
+ raise FaultyCoordinatorError(
+ "Sum of encrypted partial secshares not equal to encrypted secshare"
+ ) from e
+
+
+###
+### Coordinator
+###
+
+
+def coordinator_step(
+ pmsgs: List[ParticipantMsg],
+ t: int,
+ enckeys: List[bytes],
+) -> Tuple[CoordinatorMsg, simplpedpop.DKGOutput, bytes, List[Scalar]]:
+ n = len(enckeys)
+ if n != len(pmsgs):
+ raise ValueError
+
+ simpl_pmsgs = [pmsg.simpl_pmsg for pmsg in pmsgs]
+ simpl_cmsg, dkg_output, eq_input = simplpedpop.coordinator_step(simpl_pmsgs, t, n)
+ pubnonces = [pmsg.pubnonce for pmsg in pmsgs]
+ for i in range(n):
+ if len(pmsgs[i].enc_shares) != n:
+ raise FaultyParticipantOrCoordinatorError(
+ i, "Participant sent enc_shares with invalid length"
+ )
+ enc_secshares = [
+ Scalar.sum(*([pmsg.enc_shares[i] for pmsg in pmsgs])) for i in range(n)
+ ]
+ eq_input += b"".join(enckeys) + b"".join(pubnonces)
+
+ # In ChillDKG, the coordinator needs to broadcast the entire enc_secshares
+ # array to all participants. But in pure EncPedPop, the coordinator needs to
+ # send to each participant i only their entry enc_secshares[i].
+ #
+ # Since broadcasting the entire array is not necessary, we don't include it
+ # in encpedpop.CoordinatorMsg, but only return it as a side output, so that
+ # chilldkg.coordinator_step can pick it up. Implementations of pure
+ # EncPedPop will need to decide how to transmit enc_secshares[i] to
+ # participant i for participant_step2(); we leave this unspecified.
+ return (
+ CoordinatorMsg(simpl_cmsg, pubnonces),
+ dkg_output,
+ eq_input,
+ enc_secshares,
+ )
+
+
+def coordinator_investigate(
+ pmsgs: List[ParticipantMsg],
+) -> List[CoordinatorInvestigationMsg]:
+ n = len(pmsgs)
+ simpl_pmsgs = [pmsg.simpl_pmsg for pmsg in pmsgs]
+
+ all_enc_partial_secshares = [
+ [pmsg.enc_shares[i] for pmsg in pmsgs] for i in range(n)
+ ]
+ simpl_cinvs = simplpedpop.coordinator_investigate(simpl_pmsgs)
+ cinvs = [
+ CoordinatorInvestigationMsg(
+ all_enc_partial_secshares[i], simpl_cinvs[i].partial_pubshares
+ )
+ for i in range(n)
+ ]
+ return cinvs
diff --git a/src/jmfrost/chilldkg_ref/simplpedpop.py b/src/jmfrost/chilldkg_ref/simplpedpop.py
new file mode 100644
index 0000000..357c675
--- /dev/null
+++ b/src/jmfrost/chilldkg_ref/simplpedpop.py
@@ -0,0 +1,316 @@
+from secrets import token_bytes as random_bytes
+from typing import List, NamedTuple, NewType, Tuple, Optional, NoReturn
+
+from ..secp256k1proto.bip340 import schnorr_sign, schnorr_verify
+from ..secp256k1proto.secp256k1 import GE, Scalar
+from .util import (
+ BIP_TAG,
+ FaultyParticipantOrCoordinatorError,
+ FaultyCoordinatorError,
+ UnknownFaultyParticipantOrCoordinatorError,
+)
+from .vss import VSS, VSSCommitment
+
+
+###
+### Exceptions
+###
+
+
+class SecshareSumError(ValueError):
+ pass
+
+
+###
+### Proofs of possession (pops)
+###
+
+
+Pop = NewType("Pop", bytes)
+
+POP_MSG_TAG = BIP_TAG + "pop message"
+
+
+def pop_msg(idx: int) -> bytes:
+ return idx.to_bytes(4, byteorder="big")
+
+
+def pop_prove(seckey: bytes, idx: int, aux_rand: bytes = 32 * b"\x00") -> Pop:
+ sig = schnorr_sign(
+ pop_msg(idx), seckey, aux_rand=random_bytes(32), tag_prefix=POP_MSG_TAG
+ )
+ return Pop(sig)
+
+
+def pop_verify(pop: Pop, pubkey: bytes, idx: int) -> bool:
+ return schnorr_verify(pop_msg(idx), pubkey, pop, tag_prefix=POP_MSG_TAG)
+
+
+###
+### Messages
+###
+
+
+class ParticipantMsg(NamedTuple):
+ com: VSSCommitment
+ pop: Pop
+
+
+class CoordinatorMsg(NamedTuple):
+ coms_to_secrets: List[GE]
+ sum_coms_to_nonconst_terms: List[GE]
+ pops: List[Pop]
+
+ def to_bytes(self) -> bytes:
+ return b"".join(
+ [
+ P.to_bytes_compressed_with_infinity()
+ for P in self.coms_to_secrets + self.sum_coms_to_nonconst_terms
+ ]
+ ) + b"".join(self.pops)
+
+
+class CoordinatorInvestigationMsg(NamedTuple):
+ partial_pubshares: List[GE]
+
+
+###
+### Other common definitions
+###
+
+
+class DKGOutput(NamedTuple):
+ secshare: Optional[bytes] # None for coordinator
+ threshold_pubkey: bytes
+ pubshares: List[bytes]
+
+
+def assemble_sum_coms(
+ coms_to_secrets: List[GE], sum_coms_to_nonconst_terms: List[GE]
+) -> VSSCommitment:
+ # Sum the commitments to the secrets
+ return VSSCommitment(
+ [GE.sum(*(c for c in coms_to_secrets))] + sum_coms_to_nonconst_terms
+ )
+
+
+###
+### Participant
+###
+
+
+class ParticipantState(NamedTuple):
+ t: int
+ n: int
+ idx: int
+ com_to_secret: GE
+
+
+class ParticipantInvestigationData(NamedTuple):
+ n: int
+ idx: int
+ secshare: Scalar
+ pubshare: GE
+
+
+# To keep the algorithms of SimplPedPop and EncPedPop purely non-interactive
+# computations, we omit explicit invocations of an interactive equality check
+# protocol. ChillDKG will take care of invoking the equality check protocol.
+
+
+def participant_step1(
+ seed: bytes, t: int, n: int, idx: int
+) -> Tuple[
+ ParticipantState,
+ ParticipantMsg,
+ # The following return value is a list of n partial secret shares generated
+ # by this participant. The item at index i is supposed to be made available
+ # to participant i privately, e.g., via an external secure channel. See also
+ # the function participant_step2_prepare_secshare().
+ List[Scalar],
+]:
+ if t > n:
+ raise ValueError
+ if idx >= n:
+ raise IndexError
+ if len(seed) != 32:
+ raise ValueError
+
+ vss = VSS.generate(seed, t) # OverflowError if t >= 2**32
+ partial_secshares_from_me = vss.secshares(n)
+ pop = pop_prove(vss.secret().to_bytes(), idx)
+
+ com = vss.commit()
+ com_to_secret = com.commitment_to_secret()
+ msg = ParticipantMsg(com, pop)
+ state = ParticipantState(t, n, idx, com_to_secret)
+ return state, msg, partial_secshares_from_me
+
+
+# Helper function to prepare the secshare for participant idx's
+# participant_step2() by summing the partial_secshares returned by all
+# participants' participant_step1().
+#
+# In a pure run of SimplPedPop where secret shares are sent via external secure
+# channels (i.e., EncPedPop is not used), each participant needs to run this
+# function in preparation of their participant_step2(). Since this computation
+ # involves secret data, it cannot be delegated to the coordinator, unlike
+ # the other aggregation steps.
+#
+# If EncPedPop is used instead (as a wrapper of SimplPedPop), the coordinator
+# can securely aggregate the encrypted partial secshares into an encrypted
+# secshare by exploiting the additively homomorphic property of the encryption.
+def participant_step2_prepare_secshare(
+ partial_secshares: List[Scalar],
+) -> Scalar:
+ secshare: Scalar # REVIEW Work around missing type annotation of Scalar.sum
+ secshare = Scalar.sum(*partial_secshares)
+ return secshare
+
+
+def participant_step2(
+ state: ParticipantState,
+ cmsg: CoordinatorMsg,
+ secshare: Scalar,
+) -> Tuple[DKGOutput, bytes]:
+ t, n, idx, com_to_secret = state
+ coms_to_secrets, sum_coms_to_nonconst_terms, pops = cmsg
+
+ assert len(coms_to_secrets) == n
+ assert len(sum_coms_to_nonconst_terms) == t - 1
+ assert len(pops) == n
+
+ if coms_to_secrets[idx] != com_to_secret:
+ raise FaultyCoordinatorError(
+ "Coordinator sent unexpected first group element for local index"
+ )
+
+ for i in range(n):
+ if i == idx:
+ # No need to check our own pop.
+ continue
+ if coms_to_secrets[i].infinity:
+ raise FaultyParticipantOrCoordinatorError(
+ i, "Participant sent invalid commitment"
+ )
+ # This can be optimized: We serialize the coms_to_secrets[i] here, but
+ # schnorr_verify (inside pop_verify) will need to deserialize it again, which
+ # involves computing a square root to obtain the y coordinate.
+ if not pop_verify(pops[i], coms_to_secrets[i].to_bytes_xonly(), i):
+ raise FaultyParticipantOrCoordinatorError(
+                i, "Participant sent invalid proof-of-possession"
+ )
+
+ sum_coms = assemble_sum_coms(coms_to_secrets, sum_coms_to_nonconst_terms)
+ # Verifying the tweaked secshare against the tweaked pubshare is equivalent
+ # to verifying the untweaked secshare against the untweaked pubshare, but
+ # avoids computing the untweaked pubshare in the happy path and thereby
+ # moves a group addition to the error path.
+ sum_coms_tweaked, tweak, pubtweak = sum_coms.invalid_taproot_commit()
+ pubshare_tweaked = sum_coms_tweaked.pubshare(idx)
+ secshare_tweaked = secshare + tweak
+ if not VSSCommitment.verify_secshare(secshare_tweaked, pubshare_tweaked):
+ pubshare = pubshare_tweaked - pubtweak
+ raise UnknownFaultyParticipantOrCoordinatorError(
+ ParticipantInvestigationData(n, idx, secshare, pubshare),
+ "Received invalid secshare, "
+ "consider investigation procedure to determine faulty party",
+ )
+
+ threshold_pubkey = sum_coms_tweaked.commitment_to_secret()
+ pubshares = [
+ sum_coms_tweaked.pubshare(i) if i != idx else pubshare_tweaked for i in range(n)
+ ]
+ dkg_output = DKGOutput(
+ secshare_tweaked.to_bytes(),
+ threshold_pubkey.to_bytes_compressed(),
+ [pubshare.to_bytes_compressed() for pubshare in pubshares],
+ )
+ eq_input = t.to_bytes(4, byteorder="big") + sum_coms.to_bytes()
+ return dkg_output, eq_input
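Review note on the tweak-then-verify optimization above: verifying the tweaked secshare against the tweaked pubshare succeeds exactly when the untweaked pair matches. A toy multiplicative group (hypothetical parameters; `commit(s)` playing the role of `s*G`) shows the equivalence:

```python
# Toy multiplicative group as a stand-in for EC points (hypothetical
# parameters); commit(s) plays the role of s*G.
p, g = 467, 2
commit = lambda s: pow(g, s, p)

s, tweak = 123, 45
P = commit(s)                        # untweaked pubshare
P_tweaked = P * commit(tweak) % p    # "pubshare + tweak*G", multiplicatively

# Tweaked check passes for the correct secshare ...
assert commit(s + tweak) == P_tweaked
# ... and fails for a wrong one, just like the untweaked check would.
bad = s + 1
assert commit(bad + tweak) != P_tweaked
```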
+
+
+def participant_investigate(
+ error: UnknownFaultyParticipantOrCoordinatorError,
+ cinv: CoordinatorInvestigationMsg,
+ partial_secshares: List[Scalar],
+) -> NoReturn:
+ n, idx, secshare, pubshare = error.inv_data
+ partial_pubshares = cinv.partial_pubshares
+
+ if GE.sum(*partial_pubshares) != pubshare:
+ raise FaultyCoordinatorError("Sum of partial pubshares not equal to pubshare")
+
+ if Scalar.sum(*partial_secshares) != secshare:
+ raise SecshareSumError("Sum of partial secshares not equal to secshare")
+
+ for i in range(n):
+ if not VSSCommitment.verify_secshare(
+ partial_secshares[i], partial_pubshares[i]
+ ):
+ if i != idx:
+ raise FaultyParticipantOrCoordinatorError(
+ i, "Participant sent invalid partial secshare"
+ )
+ else:
+ # We are not faulty, so the coordinator must be.
+ raise FaultyCoordinatorError(
+ "Coordinator fiddled with the share from me to myself"
+ )
+
+ # We now know:
+ # - The sum of the partial secshares is equal to the secshare.
+ # - The sum of the partial pubshares is equal to the pubshare.
+ # - Every partial secshare matches its corresponding partial pubshare.
+ # Hence, the secshare matches the pubshare.
+ assert VSSCommitment.verify_secshare(secshare, pubshare)
+
+ # This should never happen (unless the caller fiddled with the inputs).
+ raise RuntimeError(
+ "participant_investigate() was called, but all inputs are consistent."
+ )
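Review note: the final invariant (consistent partial sums imply a consistent aggregate) can be sketched with toy discrete-log commitments, where `commit(s)` stands in for the pubshare `s*G` and `GE.sum` becomes a product:

```python
import math

# Toy discrete-log commitments (hypothetical parameters): pubshare = g^sec.
p, g = 467, 2
commit = lambda s: pow(g, s, p)

partial_sec = [10, 20, 33]
partial_pub = [commit(s) for s in partial_sec]
secshare = sum(partial_sec)
pubshare = math.prod(partial_pub) % p    # GE.sum, multiplicatively

# The checked facts: every partial secshare matches its partial pubshare ...
assert all(commit(s) == P for s, P in zip(partial_sec, partial_pub))
# ... which, together with matching sums on both sides, forces the final
# invariant asserted by participant_investigate().
assert commit(secshare) == pubshare
```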
+
+
+###
+### Coordinator
+###
+
+
+def coordinator_step(
+ pmsgs: List[ParticipantMsg], t: int, n: int
+) -> Tuple[CoordinatorMsg, DKGOutput, bytes]:
+ # Sum the commitments to the i-th coefficients for i > 0
+ #
+ # This procedure corresponds to the one described by Pedersen in Section 5.1
+ # of "Non-Interactive and Information-Theoretic Secure Verifiable Secret
+ # Sharing". However, we don't sum the commitments to the secrets (i == 0)
+ # because they'll be necessary to check the pops.
+ coms_to_secrets = [pmsg.com.commitment_to_secret() for pmsg in pmsgs]
+ # But we can sum the commitments to the non-constant terms.
+ sum_coms_to_nonconst_terms = [
+ GE.sum(*(pmsg.com.commitment_to_nonconst_terms()[j] for pmsg in pmsgs))
+ for j in range(t - 1)
+ ]
+ pops = [pmsg.pop for pmsg in pmsgs]
+ cmsg = CoordinatorMsg(coms_to_secrets, sum_coms_to_nonconst_terms, pops)
+
+ sum_coms = assemble_sum_coms(coms_to_secrets, sum_coms_to_nonconst_terms)
+ sum_coms_tweaked, _, _ = sum_coms.invalid_taproot_commit()
+ threshold_pubkey = sum_coms_tweaked.commitment_to_secret()
+ pubshares = [sum_coms_tweaked.pubshare(i) for i in range(n)]
+
+ dkg_output = DKGOutput(
+ None,
+ threshold_pubkey.to_bytes_compressed(),
+ [pubshare.to_bytes_compressed() for pubshare in pubshares],
+ )
+ eq_input = t.to_bytes(4, byteorder="big") + sum_coms.to_bytes()
+ return cmsg, dkg_output, eq_input
+
+
+def coordinator_investigate(
+ pmsgs: List[ParticipantMsg],
+) -> List[CoordinatorInvestigationMsg]:
+ n = len(pmsgs)
+ all_partial_pubshares = [[pmsg.com.pubshare(i) for pmsg in pmsgs] for i in range(n)]
+ return [CoordinatorInvestigationMsg(all_partial_pubshares[i]) for i in range(n)]
diff --git a/src/jmfrost/chilldkg_ref/util.py b/src/jmfrost/chilldkg_ref/util.py
new file mode 100644
index 0000000..10f6d78
--- /dev/null
+++ b/src/jmfrost/chilldkg_ref/util.py
@@ -0,0 +1,103 @@
+from typing import Any
+
+from ..secp256k1proto.util import tagged_hash
+
+
+BIP_TAG = "BIP DKG/"
+
+
+def tagged_hash_bip_dkg(tag: str, msg: bytes) -> bytes:
+ return tagged_hash(BIP_TAG + tag, msg)
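Review note: for readers unfamiliar with the underlying primitive, `tagged_hash` here is assumed to be the BIP 340 tagged-hash construction, which the `BIP_TAG` prefix uses for domain separation. A self-contained sketch:

```python
import hashlib

# Sketch of the BIP 340 tagged-hash construction that tagged_hash() is
# assumed to implement: sha256(sha256(tag) || sha256(tag) || msg).
def tagged_hash(tag: str, msg: bytes) -> bytes:
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + msg).digest()

# Distinct tags give unrelated digests for the same message, so values
# derived under different "BIP DKG/..." tags cannot collide by accident.
d1 = tagged_hash("BIP DKG/encpedpop seed", b"msg")
d2 = tagged_hash("BIP DKG/encpedpop secnonce", b"msg")
assert d1 != d2 and len(d1) == 32
```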
+
+
+class ProtocolError(Exception):
+ """Base exception for errors caused by received protocol messages."""
+
+
+class FaultyParticipantError(ProtocolError):
+ """Raised if a participant is faulty.
+
+ This exception is raised by the coordinator code when it detects faulty
+ behavior by a participant, i.e., a participant has deviated from the
+ protocol. The index of the participant is provided as part of the exception.
+ Assuming protocol messages have been transmitted correctly and the
+ coordinator itself is not faulty, this exception implies that the
+ participant is indeed faulty.
+
+ This exception is raised only by the coordinator code. Some faulty behavior
+ by participants will be detected by the other participants instead.
+ See `FaultyParticipantOrCoordinatorError` for details.
+
+ Attributes:
+ participant (int): Index of the faulty participant.
+ """
+
+ def __init__(self, participant: int, *args: Any):
+ self.participant = participant
+ super().__init__(participant, *args)
+
+
+class FaultyParticipantOrCoordinatorError(ProtocolError):
+ """Raised if another known participant or the coordinator is faulty.
+
+ This exception is raised by the participant code when it detects what looks
+ like faulty behavior by a suspected participant. The index of the suspected
+ participant is provided as part of the exception.
+
+ Importantly, this exception is not proof that the suspected participant is
+ indeed faulty. It is instead possible that the coordinator has deviated from
+ the protocol in a way that makes it look as if the suspected participant has
+ deviated from the protocol. In other words, assuming messages have been
+ transmitted correctly and the raising participant is not faulty, this
+ exception implies that
+ - the suspected participant is faulty,
+ - *or* the coordinator is faulty (and has framed the suspected
+ participant).
+
+ This exception is raised only by the participant code. Some faulty behavior
+ by participants will be detected by the coordinator instead. See
+ `FaultyParticipantError` for details.
+
+ Attributes:
+ participant (int): Index of the suspected participant.
+ """
+
+ def __init__(self, participant: int, *args: Any):
+ self.participant = participant
+ super().__init__(participant, *args)
+
+
+class FaultyCoordinatorError(ProtocolError):
+ """Raised if the coordinator is faulty.
+
+ This exception is raised by the participant code when it detects faulty
+ behavior by the coordinator, i.e., the coordinator has deviated from the
+ protocol. Assuming protocol messages have been transmitted correctly and the
+ raising participant is not faulty, this exception implies that the
+ coordinator is indeed faulty.
+ """
+
+
+class UnknownFaultyParticipantOrCoordinatorError(ProtocolError):
+ """Raised if another unknown participant or the coordinator is faulty.
+
+ This exception is raised by the participant code when it detects what looks
+ like faulty behavior by some other participant, but there is insufficient
+ information to determine which participant should be suspected.
+
+ To determine a suspected participant, the raising participant may choose to
+ run the optional investigation procedure of the protocol, which requires
+ obtaining an investigation message from the coordinator. See the
+ `participant_investigate` function for details.
+
+    This exception is raised only for specific faulty behavior (namely,
+    sending invalid encrypted secret shares) that cannot be attributed to a
+    particular participant without further help from the coordinator.
+
+ Attributes:
+ inv_data: Information required to perform the investigation.
+ """
+
+ def __init__(self, inv_data: Any, *args: Any):
+ self.inv_data = inv_data
+ super().__init__(*args)
diff --git a/src/jmfrost/chilldkg_ref/vss.py b/src/jmfrost/chilldkg_ref/vss.py
new file mode 100644
index 0000000..0412207
--- /dev/null
+++ b/src/jmfrost/chilldkg_ref/vss.py
@@ -0,0 +1,146 @@
+from __future__ import annotations
+
+from typing import List, Tuple
+
+from ..secp256k1proto.secp256k1 import GE, G, Scalar
+from ..secp256k1proto.util import tagged_hash
+
+from .util import tagged_hash_bip_dkg
+
+
+class Polynomial:
+ # A scalar polynomial.
+ #
+ # A polynomial f of degree at most t - 1 is represented by a list `coeffs`
+ # of t coefficients, i.e., f(x) = coeffs[0] + ... + coeffs[t-1] *
+    # x^(t-1).
+ coeffs: List[Scalar]
+
+ def __init__(self, coeffs: List[Scalar]) -> None:
+ self.coeffs = coeffs
+
+ def eval(self, x: Scalar) -> Scalar:
+ # Evaluate a polynomial at position x.
+
+ value = Scalar(0)
+ # Reverse coefficients to compute evaluation via Horner's method
+ for coeff in self.coeffs[::-1]:
+ value = value * x + coeff
+ return value
+
+ def __call__(self, x: Scalar) -> Scalar:
+ return self.eval(x)
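Review note: `Polynomial.eval()` uses Horner's rule. A quick cross-check against the naive power-sum form over a toy prime field (illustrative values only):

```python
# Horner's rule as used in Polynomial.eval(), cross-checked against the
# naive power-sum evaluation over a toy prime field.
q = 101
coeffs = [7, 3, 2]  # f(x) = 7 + 3x + 2x^2

def horner(coeffs, x, q):
    value = 0
    for c in reversed(coeffs):
        value = (value * x + c) % q
    return value

for x in range(q):
    naive = sum(c * pow(x, j, q) for j, c in enumerate(coeffs)) % q
    assert horner(coeffs, x, q) == naive
```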
+
+
+class VSSCommitment:
+ ges: List[GE]
+
+ def __init__(self, ges: List[GE]) -> None:
+ self.ges = ges
+
+ def t(self) -> int:
+ return len(self.ges)
+
+ def pubshare(self, i: int) -> GE:
+ pubshare: GE = GE.batch_mul(
+ *(((i + 1) ** j, self.ges[j]) for j in range(0, len(self.ges)))
+ )
+ return pubshare
+
+ @staticmethod
+ def verify_secshare(secshare: Scalar, pubshare: GE) -> bool:
+ # The caller needs to provide the correct pubshare(i)
+ actual = secshare * G
+ valid: bool = actual == pubshare
+ return valid
+
+ def to_bytes(self) -> bytes:
+ # Return commitments to the coefficients of f.
+ return b"".join([ge.to_bytes_compressed_with_infinity() for ge in self.ges])
+
+ def __add__(self, other: VSSCommitment) -> VSSCommitment:
+ assert self.t() == other.t()
+ return VSSCommitment([self.ges[i] + other.ges[i] for i in range(self.t())])
+
+ @staticmethod
+ def from_bytes_and_t(b: bytes, t: int) -> VSSCommitment:
+ if len(b) != 33 * t:
+ raise ValueError
+ ges = [GE.from_bytes_compressed(b[i : i + 33]) for i in range(0, 33 * t, 33)]
+ return VSSCommitment(ges)
+
+ def commitment_to_secret(self) -> GE:
+ return self.ges[0]
+
+ def commitment_to_nonconst_terms(self) -> List[GE]:
+ return self.ges[1 : self.t()]
+
+ def invalid_taproot_commit(self) -> Tuple[VSSCommitment, Scalar, GE]:
+ # Return a modified VSS commitment such that the threshold public key
+ # generated from it has an unspendable BIP 341 Taproot script path.
+ #
+ # Specifically, for a VSS commitment `com`, we have:
+ # `com.invalid_taproot_commit().commitment_to_secret() = com.commitment_to_secret() + t*G`.
+ #
+ # The tweak `t` commits to an empty message, which is invalid according
+ # to BIP 341 for Taproot script spends. This follows BIP 341's
+ # recommended approach for committing to an unspendable script path.
+ #
+ # This prevents a malicious participant from secretly inserting a *valid*
+ # Taproot commitment to a script path into the summed VSS commitment during
+ # the DKG protocol. If the resulting threshold public key was used directly
+ # in a BIP 341 Taproot output, the malicious participant would be able to
+ # spend the output using their hidden script path.
+ #
+ # The function returns the updated VSS commitment and the tweak `t` which
+ # must be added to all secret shares of the commitment.
+ pk = self.commitment_to_secret()
+ secshare_tweak = Scalar.from_bytes(
+ tagged_hash("TapTweak", pk.to_bytes_compressed())
+ )
+ pubshare_tweak = secshare_tweak * G
+ vss_tweak = VSSCommitment([pubshare_tweak] + [GE()] * (self.t() - 1))
+ return (self + vss_tweak, secshare_tweak, pubshare_tweak)
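Review note: the tweak scalar above is a BIP 341-style `TapTweak` tagged hash of the commitment to the secret, so every participant derives it deterministically. A standalone sketch of that derivation (the generator's compressed bytes are used only as sample input; the real code hashes `pk.to_bytes_compressed()`):

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # BIP 340 tagged hash: sha256(sha256(tag) || sha256(tag) || msg).
    th = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(th + th + msg).digest()

# Sample compressed pubkey bytes (the secp256k1 generator, used here only
# as input data). The tweak commits to nothing beyond the key bytes, so
# there is no valid BIP 341 script-path spend for the tweaked key.
pk_bytes = bytes.fromhex(
    "0279be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798"
)
tweak = int.from_bytes(tagged_hash("TapTweak", pk_bytes), "big")
# Deterministic: every participant derives the same tweak from the same
# commitment to the secret.
assert tweak == int.from_bytes(tagged_hash("TapTweak", pk_bytes), "big")
```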
+
+
+class VSS:
+ f: Polynomial
+
+ def __init__(self, f: Polynomial) -> None:
+ self.f = f
+
+ @staticmethod
+ def generate(seed: bytes, t: int) -> VSS:
+ coeffs = [
+ Scalar.from_bytes(
+ tagged_hash_bip_dkg("vss coeffs", seed + i.to_bytes(4, byteorder="big"))
+ )
+ for i in range(t)
+ ]
+ return VSS(Polynomial(coeffs))
+
+ def secshare_for(self, i: int) -> Scalar:
+ # Return the secret share for the participant with index i.
+ #
+ # This computes f(i+1).
+ if i < 0:
+ raise ValueError(f"Invalid participant index: {i}")
+ x = Scalar(i + 1)
+ # Ensure we don't compute f(0), which is the secret.
+ assert x != Scalar(0)
+ return self.f(x)
+
+ def secshares(self, n: int) -> List[Scalar]:
+ # Return the secret shares for the participants with indices 0..n-1.
+ #
+ # This computes [f(1), ..., f(n)].
+ return [self.secshare_for(i) for i in range(0, n)]
+
+ def commit(self) -> VSSCommitment:
+ return VSSCommitment([c * G for c in self.f.coeffs])
+
+ def secret(self) -> Scalar:
+ # Return the secret to be shared.
+ #
+ # This computes f(0).
+ return self.f.coeffs[0]
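Review note: `VSS` implements Shamir secret sharing — shares are evaluations `f(1), ..., f(n)` and the secret is `f(0)`. A self-contained round trip over a toy prime field (illustrative values; the real code works modulo the secp256k1 group order):

```python
# Shamir sharing as implemented by VSS.generate()/secshares(), over a toy
# prime field with made-up coefficients.
q = 2**31 - 1
coeffs = [1234, 5678, 91011]          # t = 3; secret = f(0) = coeffs[0]
f = lambda x: sum(c * pow(x, j, q) for j, c in enumerate(coeffs)) % q

n_parts = 5
shares = {i + 1: f(i + 1) for i in range(n_parts)}  # secshare_for(i) = f(i+1)

def lagrange_at_zero(i, xs, q):
    # Lagrange basis value for share index i, evaluated at x = 0.
    num = den = 1
    for j in xs:
        if j == i:
            continue
        num = num * j % q
        den = den * (j - i) % q
    return num * pow(den, q - 2, q) % q

# Any t of the n shares reconstruct the secret.
subset = [1, 3, 5]
secret = sum(shares[i] * lagrange_at_zero(i, subset, q) for i in subset) % q
assert secret == coeffs[0]
```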
diff --git a/src/jmfrost/frost_ref/__init__.py b/src/jmfrost/frost_ref/__init__.py
new file mode 100644
index 0000000..40a96af
--- /dev/null
+++ b/src/jmfrost/frost_ref/__init__.py
@@ -0,0 +1 @@
+# -*- coding: utf-8 -*-
diff --git a/src/jmfrost/frost_ref/reference.py b/src/jmfrost/frost_ref/reference.py
new file mode 100644
index 0000000..b60f3d4
--- /dev/null
+++ b/src/jmfrost/frost_ref/reference.py
@@ -0,0 +1,450 @@
+# -*- coding: utf-8 -*-
+
+# BIP FROST Signing reference implementation
+#
+# It's worth noting that many functions, types, and exceptions were directly
+# copied or modified from the MuSig2 (BIP 327) reference code, found at:
+# https://github.com/bitcoin/bips/blob/master/bip-0327/reference.py
+#
+# WARNING: This implementation is for demonstration purposes only and _not_ to
+# be used in production environments. The code is vulnerable to timing attacks,
+# for example.
+
+from typing import Any, List, Optional, Tuple, NewType, NamedTuple
+import itertools
+import secrets
+import time
+
+from .utils.bip340 import *
+
+PlainPk = NewType('PlainPk', bytes)
+XonlyPk = NewType('XonlyPk', bytes)
+
+# There are two types of exceptions that can be raised by this implementation:
+# - ValueError for indicating that an input doesn't conform to some function
+# precondition (e.g. an input array is the wrong length, a serialized
+# representation doesn't have the correct format).
+# - InvalidContributionError for indicating that a signer (or the
+# aggregator) is misbehaving in the protocol.
+#
+# Assertions are used to (1) satisfy the type-checking system, and (2) check for
+# inconvenient events that can't happen except with negligible probability (e.g.
+# output of a hash function is 0) and can't be manually triggered by any
+# signer.
+
+# This exception is raised if a party (signer or nonce aggregator) sends invalid
+# values. Actual implementations should not crash when receiving invalid
+# contributions. Instead, they should hold the offending party accountable.
+class InvalidContributionError(Exception):
+ def __init__(self, signer_id, contrib):
+ # participant identifier of the signer who sent the invalid value
+ self.id = signer_id
+ # contrib is one of "pubkey", "pubnonce", "aggnonce", or "psig".
+ self.contrib = contrib
+
+infinity = None
+
+def xbytes(P: Point) -> bytes:
+ return bytes_from_int(x(P))
+
+def cbytes(P: Point) -> bytes:
+ a = b'\x02' if has_even_y(P) else b'\x03'
+ return a + xbytes(P)
+
+def cbytes_ext(P: Optional[Point]) -> bytes:
+ if is_infinite(P):
+ return (0).to_bytes(33, byteorder='big')
+ assert P is not None
+ return cbytes(P)
+
+def point_negate(P: Optional[Point]) -> Optional[Point]:
+ if P is None:
+ return P
+ return (x(P), p - y(P))
+
+def cpoint(x: bytes) -> Point:
+ if len(x) != 33:
+ raise ValueError('x is not a valid compressed point.')
+ P = lift_x(x[1:33])
+ if P is None:
+ raise ValueError('x is not a valid compressed point.')
+ if x[0] == 2:
+ return P
+ elif x[0] == 3:
+ P = point_negate(P)
+ assert P is not None
+ return P
+ else:
+ raise ValueError('x is not a valid compressed point.')
+
+def cpoint_ext(x: bytes) -> Optional[Point]:
+ if x == (0).to_bytes(33, 'big'):
+ return None
+ else:
+ return cpoint(x)
+
+def int_ids(lst: List[bytes]) -> List[int]:
+ res = []
+ for x in lst:
+ id_ = int_from_bytes(x)
+ #todo: add check for < max_participants?
+ if not 1 <= id_ < n:
+ raise ValueError('x is not a valid participant identifier.')
+ res.append(id_)
+ return res
+
+# Return the plain public key corresponding to a given secret key
+def individual_pk(seckey: bytes) -> PlainPk:
+ d0 = int_from_bytes(seckey)
+ if not (1 <= d0 <= n - 1):
+ raise ValueError('The secret key must be an integer in the range 1..n-1.')
+ P = point_mul(G, d0)
+ assert P is not None
+ return PlainPk(cbytes(P))
+
+def derive_interpolating_value_internal(L: List[int], x_i: int) -> int:
+ num, deno = 1, 1
+ for x_j in L:
+ if x_j == x_i:
+ continue
+ num *= x_j
+ deno *= (x_j - x_i)
+    return num * pow(deno, n - 2, n) % n  # deno^(n-2) = deno^(-1) mod prime n
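Review note: a useful sanity property of these interpolating values is that, for any identifier set, they sum to 1 mod the field order (they interpolate the constant polynomial 1 at x = 0). Sketch over a toy prime in place of the curve order:

```python
# Toy prime standing in for the curve order n.
q = 97

def interp(L, x_i, q):
    # Mirrors derive_interpolating_value_internal() over the toy field.
    num, deno = 1, 1
    for x_j in L:
        if x_j == x_i:
            continue
        num = num * x_j % q
        deno = deno * (x_j - x_i) % q
    return num * pow(deno, q - 2, q) % q

ids = [1, 4, 6]
# Lagrange basis values at x = 0 always sum to 1.
assert sum(interp(ids, i, q) for i in ids) % q == 1
```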
+
+def derive_interpolating_value(ids: List[bytes], my_id: bytes) -> int:
+    if my_id not in ids:
+ raise ValueError('The signer\'s id must be present in the participant identifier list.')
+    if not all(ids.count(pid) <= 1 for pid in ids):
+ raise ValueError('The participant identifier list must contain unique elements.')
+ #todo: turn this into raise ValueError?
+ assert 1 <= int_from_bytes(my_id) < n
+ integer_ids = int_ids(ids)
+ return derive_interpolating_value_internal(integer_ids, int_from_bytes(my_id))
+
+def check_pubshares_correctness(secshares: List[bytes], pubshares: List[PlainPk]) -> bool:
+ assert len(secshares) == len(pubshares)
+ for secshare, pubshare in zip(secshares, pubshares):
+ if not individual_pk(secshare) == pubshare:
+ return False
+ return True
+
+def check_group_pubkey_correctness(min_participants: int, group_pk: PlainPk, ids: List[bytes], pubshares: List[PlainPk]) -> bool:
+ assert len(ids) == len(pubshares)
+ assert len(ids) >= min_participants
+
+ max_participants = len(ids)
+ # loop through all possible number of signers
+ for signer_count in range(min_participants, max_participants + 1):
+ # loop through all possible signer sets with length `signer_count`
+ for signer_set in itertools.combinations(zip(ids, pubshares), signer_count):
+ signer_ids = [pid for pid, pubshare in signer_set]
+ signer_pubshares = [pubshare for pid, pubshare in signer_set]
+ expected_pk = derive_group_pubkey(signer_pubshares, signer_ids)
+ if expected_pk != group_pk:
+ return False
+ return True
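Review note: this check enumerates every signer set of size `min_participants..max_participants`, so its cost is a sum of binomial coefficients and grows quickly with `max_participants`. A quick count (illustrative sizes):

```python
import itertools
import math

# Count the signer sets that check_group_pubkey_correctness() would verify
# for 5 participants with a threshold of 3.
ids = list(range(5))
min_p, max_p = 3, len(ids)
checked = sum(
    1
    for k in range(min_p, max_p + 1)
    for _ in itertools.combinations(ids, k)
)
# C(5,3) + C(5,4) + C(5,5) signer sets in total.
assert checked == sum(math.comb(max_p, k) for k in range(min_p, max_p + 1))
```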
+
+def check_frost_key_compatibility(max_participants: int, min_participants: int, group_pk: PlainPk, ids: List[bytes], secshares: List[bytes], pubshares: List[PlainPk]) -> bool:
+ if not max_participants >= min_participants > 1:
+ return False
+ if not len(ids) == len(secshares) == len(pubshares) == max_participants:
+ return False
+ pubshare_check = check_pubshares_correctness(secshares, pubshares)
+ group_pk_check = check_group_pubkey_correctness(min_participants, group_pk, ids, pubshares)
+ return pubshare_check and group_pk_check
+
+TweakContext = NamedTuple('TweakContext', [('Q', Point),
+ ('gacc', int),
+ ('tacc', int)])
+AGGREGATOR_ID = b'aggregator'
+
+def get_xonly_pk(tweak_ctx: TweakContext) -> XonlyPk:
+ Q, _, _ = tweak_ctx
+ return XonlyPk(xbytes(Q))
+
+def get_plain_pk(tweak_ctx: TweakContext) -> PlainPk:
+ Q, _, _ = tweak_ctx
+ return PlainPk(cbytes(Q))
+
+#nit: switch the args ordering
+def derive_group_pubkey(pubshares: List[PlainPk], ids: List[bytes]) -> PlainPk:
+ assert len(pubshares) == len(ids)
+ assert AGGREGATOR_ID not in ids
+ Q = infinity
+ for my_id, pubshare in zip(ids, pubshares):
+ try:
+ X_i = cpoint(pubshare)
+ except ValueError:
+ raise InvalidContributionError(int_from_bytes(my_id), "pubshare")
+ lam_i = derive_interpolating_value(ids, my_id)
+ Q = point_add(Q, point_mul(X_i, lam_i))
+ # Q is not the point at infinity except with negligible probability.
+ assert Q is not infinity
+ return PlainPk(cbytes(Q))
+
+def tweak_ctx_init(pubshares: List[PlainPk], ids: List[bytes]) -> TweakContext:
+ group_pk = derive_group_pubkey(pubshares, ids)
+ Q = cpoint(group_pk)
+ gacc = 1
+ tacc = 0
+ return TweakContext(Q, gacc, tacc)
+
+def apply_tweak(tweak_ctx: TweakContext, tweak: bytes, is_xonly: bool) -> TweakContext:
+ if len(tweak) != 32:
+ raise ValueError('The tweak must be a 32-byte array.')
+ Q, gacc, tacc = tweak_ctx
+ if is_xonly and not has_even_y(Q):
+ g = n - 1
+ else:
+ g = 1
+ t = int_from_bytes(tweak)
+ if t >= n:
+ raise ValueError('The tweak must be less than n.')
+ Q_ = point_add(point_mul(Q, g), point_mul(G, t))
+ if Q_ is None:
+ raise ValueError('The result of tweaking cannot be infinity.')
+ gacc_ = g * gacc % n
+ tacc_ = (t + g * tacc) % n
+ return TweakContext(Q_, gacc_, tacc_)
+
+def bytes_xor(a: bytes, b: bytes) -> bytes:
+ return bytes(x ^ y for x, y in zip(a, b))
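A caveat worth noting for reviewers (illustration, not part of the patch): `zip` stops at the shorter input, so `bytes_xor` silently truncates on mismatched lengths. All callers in this module pass equal-length 32-byte values, so this never triggers here:

```python
def bytes_xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

assert bytes_xor(b'\x0f\x0f', b'\xf0\xf0') == b'\xff\xff'
# silently truncated to the shorter operand:
assert len(bytes_xor(b'\x00' * 32, b'\x00' * 4)) == 4
```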
+
+def nonce_hash(rand: bytes, pubshare: PlainPk, group_pk: XonlyPk, i: int, msg_prefixed: bytes, extra_in: bytes) -> int:
+ buf = b''
+ buf += rand
+ buf += len(pubshare).to_bytes(1, 'big')
+ buf += pubshare
+ buf += len(group_pk).to_bytes(1, 'big')
+ buf += group_pk
+ buf += msg_prefixed
+ buf += len(extra_in).to_bytes(4, 'big')
+ buf += extra_in
+ buf += i.to_bytes(1, 'big')
+ return int_from_bytes(tagged_hash('FROST/nonce', buf))
+
+def nonce_gen_internal(rand_: bytes, secshare: Optional[bytes], pubshare: Optional[PlainPk], group_pk: Optional[XonlyPk], msg: Optional[bytes], extra_in: Optional[bytes]) -> Tuple[bytearray, bytes]:
+ if secshare is not None:
+ rand = bytes_xor(secshare, tagged_hash('FROST/aux', rand_))
+ else:
+ rand = rand_
+ if pubshare is None:
+ pubshare = PlainPk(b'')
+ if group_pk is None:
+ group_pk = XonlyPk(b'')
+ if msg is None:
+ msg_prefixed = b'\x00'
+ else:
+ msg_prefixed = b'\x01'
+ msg_prefixed += len(msg).to_bytes(8, 'big')
+ msg_prefixed += msg
+ if extra_in is None:
+ extra_in = b''
+ k_1 = nonce_hash(rand, pubshare, group_pk, 0, msg_prefixed, extra_in) % n
+ k_2 = nonce_hash(rand, pubshare, group_pk, 1, msg_prefixed, extra_in) % n
+ # k_1 == 0 or k_2 == 0 cannot occur except with negligible probability.
+ assert k_1 != 0
+ assert k_2 != 0
+ R_s1 = point_mul(G, k_1)
+ R_s2 = point_mul(G, k_2)
+ assert R_s1 is not None
+ assert R_s2 is not None
+ pubnonce = cbytes(R_s1) + cbytes(R_s2)
+ # use mutable `bytearray` since secnonce needs to be overwritten with zeros during signing.
+ secnonce = bytearray(bytes_from_int(k_1) + bytes_from_int(k_2))
+ return secnonce, pubnonce
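An aside (not part of the patch): the `msg_prefixed` encoding above exists so that "no message" and an empty message hash differently, which a bare concatenation would not guarantee. A minimal sketch of that framing (`prefix_msg` is a hypothetical name for illustration):

```python
def prefix_msg(msg):
    # b'\x00' means "no message"; otherwise b'\x01' + 8-byte big-endian length + msg
    if msg is None:
        return b'\x00'
    return b'\x01' + len(msg).to_bytes(8, 'big') + msg

# the two cases are distinguishable, unlike plain concatenation
assert prefix_msg(None) != prefix_msg(b'')
assert prefix_msg(b'hi')[1:9] == (2).to_bytes(8, 'big')
```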
+
+#think: can msg & extra_in be of any length here?
+#think: why doesn't musig2 ref code check for `pk` length here?
+def nonce_gen(secshare: Optional[bytes], pubshare: Optional[PlainPk], group_pk: Optional[XonlyPk], msg: Optional[bytes], extra_in: Optional[bytes]) -> Tuple[bytearray, bytes]:
+ if secshare is not None and len(secshare) != 32:
+ raise ValueError('The optional byte array secshare must have length 32.')
+ if pubshare is not None and len(pubshare) != 33:
+ raise ValueError('The optional byte array pubshare must have length 33.')
+ if group_pk is not None and len(group_pk) != 32:
+ raise ValueError('The optional byte array group_pk must have length 32.')
+ # bench: will adding individual_pk(secshare) == pubshare check, increase the execution time significantly?
+ rand_ = secrets.token_bytes(32)
+ return nonce_gen_internal(rand_, secshare, pubshare, group_pk, msg, extra_in)
+
+def nonce_agg(pubnonces: List[bytes], ids: List[bytes]) -> bytes:
+ if len(pubnonces) != len(ids):
+ raise ValueError('The pubnonces and ids arrays must have the same length.')
+ aggnonce = b''
+ for j in (1, 2):
+ R_j = infinity
+ for my_id_, pubnonce in zip(ids, pubnonces):
+ try:
+ R_ij = cpoint(pubnonce[(j-1)*33:j*33])
+ except ValueError:
+ my_id = int_from_bytes(my_id_) if my_id_ != AGGREGATOR_ID else my_id_
+ raise InvalidContributionError(my_id, "pubnonce")
+ R_j = point_add(R_j, R_ij)
+ aggnonce += cbytes_ext(R_j)
+ return aggnonce
+
+SessionContext = NamedTuple('SessionContext', [('aggnonce', bytes),
+ ('identifiers', List[bytes]),
+ ('pubshares', List[PlainPk]),
+ ('tweaks', List[bytes]),
+ ('is_xonly', List[bool]),
+ ('msg', bytes)])
+
+def group_pubkey_and_tweak(pubshares: List[PlainPk], ids: List[bytes], tweaks: List[bytes], is_xonly: List[bool]) -> TweakContext:
+ if len(pubshares) != len(ids):
+ raise ValueError('The pubshares and ids arrays must have the same length.')
+ if len(tweaks) != len(is_xonly):
+ raise ValueError('The tweaks and is_xonly arrays must have the same length.')
+ tweak_ctx = tweak_ctx_init(pubshares, ids)
+ v = len(tweaks)
+ for i in range(v):
+ tweak_ctx = apply_tweak(tweak_ctx, tweaks[i], is_xonly[i])
+ return tweak_ctx
+
+def get_session_values(session_ctx: SessionContext) -> Tuple[Point, int, int, int, Point, int]:
+ (aggnonce, ids, pubshares, tweaks, is_xonly, msg) = session_ctx
+ Q, gacc, tacc = group_pubkey_and_tweak(pubshares, ids, tweaks, is_xonly)
+ # sort the ids before serializing because ROAST paper considers them as a set
+ concat_ids = b''.join(sorted(ids))
+ b = int_from_bytes(tagged_hash('FROST/noncecoef', concat_ids + aggnonce + xbytes(Q) + msg)) % n
+ try:
+ R_1 = cpoint_ext(aggnonce[0:33])
+ R_2 = cpoint_ext(aggnonce[33:66])
+ except ValueError:
+ # Nonce aggregator sent invalid nonces
+ raise InvalidContributionError(None, "aggnonce")
+ R_ = point_add(R_1, point_mul(R_2, b))
+ R = R_ if not is_infinite(R_) else G
+ assert R is not None
+ e = int_from_bytes(tagged_hash('BIP0340/challenge', xbytes(R) + xbytes(Q) + msg)) % n
+ return (Q, gacc, tacc, b, R, e)
+
+def get_session_interpolating_value(session_ctx: SessionContext, my_id: bytes) -> int:
+ (_, ids, _, _, _, _) = session_ctx
+ return derive_interpolating_value(ids, my_id)
+
+def session_has_signer_pubshare(session_ctx: SessionContext, pubshare: bytes) -> bool:
+ (_, _, pubshares_list, _, _, _) = session_ctx
+ return pubshare in pubshares_list
+
+def sign(secnonce: bytearray, secshare: bytes, my_id: bytes, session_ctx: SessionContext) -> bytes:
+ # do we really need the below check?
+ # add test vector for this check if confirmed
+ if not 0 < int_from_bytes(my_id) < n:
+ raise ValueError('The signer\'s participant identifier is out of range')
+ (Q, gacc, _, b, R, e) = get_session_values(session_ctx)
+ k_1_ = int_from_bytes(secnonce[0:32])
+ k_2_ = int_from_bytes(secnonce[32:64])
+ # Overwrite the secnonce argument with zeros such that subsequent calls of
+ # sign with the same secnonce raise a ValueError.
+ secnonce[:] = bytearray(b'\x00'*64)
+ if not 0 < k_1_ < n:
+ raise ValueError('first secnonce value is out of range.')
+ if not 0 < k_2_ < n:
+ raise ValueError('second secnonce value is out of range.')
+ k_1 = k_1_ if has_even_y(R) else n - k_1_
+ k_2 = k_2_ if has_even_y(R) else n - k_2_
+ d_ = int_from_bytes(secshare)
+ if not 0 < d_ < n:
+ raise ValueError('The signer\'s secret share value is out of range.')
+ P = point_mul(G, d_)
+ assert P is not None
+ pubshare = cbytes(P)
+ if not session_has_signer_pubshare(session_ctx, pubshare):
+ raise ValueError('The signer\'s pubshare must be included in the list of pubshares.')
+ a = get_session_interpolating_value(session_ctx, my_id)
+ g = 1 if has_even_y(Q) else n - 1
+ d = g * gacc * d_ % n
+ s = (k_1 + b * k_2 + e * a * d) % n
+ psig = bytes_from_int(s)
+ R_s1 = point_mul(G, k_1_)
+ R_s2 = point_mul(G, k_2_)
+ assert R_s1 is not None
+ assert R_s2 is not None
+ pubnonce = cbytes(R_s1) + cbytes(R_s2)
+ # Optional correctness check. The result of signing should pass signature verification.
+ assert partial_sig_verify_internal(psig, my_id, pubnonce, pubshare, session_ctx)
+ return psig
+
+#todo: should we hash the signer set (or pubshares) too? Otherwise the same nonce will be generated even if the signer set changes
+def det_nonce_hash(secshare_: bytes, aggothernonce: bytes, tweaked_gpk: bytes, msg: bytes, i: int) -> int:
+ buf = b''
+ buf += secshare_
+ buf += aggothernonce
+ buf += tweaked_gpk
+ buf += len(msg).to_bytes(8, 'big')
+ buf += msg
+ buf += i.to_bytes(1, 'big')
+ return int_from_bytes(tagged_hash('FROST/deterministic/nonce', buf))
+
+def deterministic_sign(secshare: bytes, my_id: bytes, aggothernonce: bytes, ids: List[bytes], pubshares: List[PlainPk], tweaks: List[bytes], is_xonly: List[bool], msg: bytes, rand: Optional[bytes]) -> Tuple[bytes, bytes]:
+ if rand is not None:
+ secshare_ = bytes_xor(secshare, tagged_hash('FROST/aux', rand))
+ else:
+ secshare_ = secshare
+
+ tweaked_gpk = get_xonly_pk(group_pubkey_and_tweak(pubshares, ids, tweaks, is_xonly))
+
+ k_1 = det_nonce_hash(secshare_, aggothernonce, tweaked_gpk, msg, 0) % n
+ k_2 = det_nonce_hash(secshare_, aggothernonce, tweaked_gpk, msg, 1) % n
+ # k_1 == 0 or k_2 == 0 cannot occur except with negligible probability.
+ assert k_1 != 0
+ assert k_2 != 0
+
+ R_s1 = point_mul(G, k_1)
+ R_s2 = point_mul(G, k_2)
+ assert R_s1 is not None
+ assert R_s2 is not None
+ pubnonce = cbytes(R_s1) + cbytes(R_s2)
+ secnonce = bytearray(bytes_from_int(k_1) + bytes_from_int(k_2))
+ try:
+ aggnonce = nonce_agg([pubnonce, aggothernonce], [my_id, AGGREGATOR_ID])
+ except Exception:
+ raise InvalidContributionError(None, "aggothernonce")
+ session_ctx = SessionContext(aggnonce, ids, pubshares, tweaks, is_xonly, msg)
+ psig = sign(secnonce, secshare, my_id, session_ctx)
+ return (pubnonce, psig)
+
+def partial_sig_verify(psig: bytes, ids: List[bytes], pubnonces: List[bytes], pubshares: List[PlainPk], tweaks: List[bytes], is_xonly: List[bool], msg: bytes, i: int) -> bool:
+ if not len(ids) == len(pubnonces) == len(pubshares):
+ raise ValueError('The ids, pubnonces and pubshares arrays must have the same length.')
+ if len(tweaks) != len(is_xonly):
+ raise ValueError('The tweaks and is_xonly arrays must have the same length.')
+ aggnonce = nonce_agg(pubnonces, ids)
+ session_ctx = SessionContext(aggnonce, ids, pubshares, tweaks, is_xonly, msg)
+ return partial_sig_verify_internal(psig, ids[i], pubnonces[i], pubshares[i], session_ctx)
+
+#todo: catch `cpoint` ValueError and return False
+def partial_sig_verify_internal(psig: bytes, my_id: bytes, pubnonce: bytes, pubshare: bytes, session_ctx: SessionContext) -> bool:
+ (Q, gacc, _, b, R, e) = get_session_values(session_ctx)
+ s = int_from_bytes(psig)
+ if s >= n:
+ return False
+ if not session_has_signer_pubshare(session_ctx, pubshare):
+ return False
+ R_s1 = cpoint(pubnonce[0:33])
+ R_s2 = cpoint(pubnonce[33:66])
+ Re_s_ = point_add(R_s1, point_mul(R_s2, b))
+ Re_s = Re_s_ if has_even_y(R) else point_negate(Re_s_)
+ P = cpoint(pubshare)
+ if P is None:
+ return False
+ a = get_session_interpolating_value(session_ctx, my_id)
+ g = 1 if has_even_y(Q) else n - 1
+ g_ = g * gacc % n
+ return point_mul(G, s) == point_add(Re_s, point_mul(P, e * a * g_ % n))
+
+def partial_sig_agg(psigs: List[bytes], ids: List[bytes], session_ctx: SessionContext) -> bytes:
+ assert AGGREGATOR_ID not in ids
+ if len(psigs) != len(ids):
+ raise ValueError('The psigs and ids arrays must have the same length.')
+ (Q, _, tacc, _, R, e) = get_session_values(session_ctx)
+ s = 0
+ for my_id, psig in zip(ids, psigs):
+ s_i = int_from_bytes(psig)
+ if s_i >= n:
+ raise InvalidContributionError(int_from_bytes(my_id), "psig")
+ s = (s + s_i) % n
+ g = 1 if has_even_y(Q) else n - 1
+ s = (s + e * g * tacc) % n
+ return xbytes(R) + bytes_from_int(s)
diff --git a/src/jmfrost/frost_ref/utils/__init__.py b/src/jmfrost/frost_ref/utils/__init__.py
new file mode 100644
index 0000000..40a96af
--- /dev/null
+++ b/src/jmfrost/frost_ref/utils/__init__.py
@@ -0,0 +1 @@
+# -*- coding: utf-8 -*-
diff --git a/src/jmfrost/frost_ref/utils/bip340.py b/src/jmfrost/frost_ref/utils/bip340.py
new file mode 100644
index 0000000..00dd638
--- /dev/null
+++ b/src/jmfrost/frost_ref/utils/bip340.py
@@ -0,0 +1,93 @@
+#
+# The following helper functions were copied from the BIP-340 reference implementation:
+# https://github.com/bitcoin/bips/blob/master/bip-0340/reference.py
+#
+
+from typing import Tuple, Optional
+import hashlib
+
+p = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC2F
+n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
+
+# Points are tuples of X and Y coordinates and the point at infinity is
+# represented by the None keyword.
+G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798, 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)
+
+Point = Tuple[int, int]
+
+# This implementation can be sped up by storing the midstate after hashing
+# tag_hash instead of rehashing it all the time.
+def tagged_hash(tag: str, msg: bytes) -> bytes:
+ tag_hash = hashlib.sha256(tag.encode()).digest()
+ return hashlib.sha256(tag_hash + tag_hash + msg).digest()
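The midstate optimization mentioned in the comment above can be sketched with `hashlib`'s `copy()` (an illustration for reviewers, not part of the patch; `make_tagged_hasher` is a hypothetical name):

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + msg).digest()

def make_tagged_hasher(tag: str):
    tag_hash = hashlib.sha256(tag.encode()).digest()
    midstate = hashlib.sha256(tag_hash + tag_hash)  # hash the 64-byte prefix once
    def tagged(msg: bytes) -> bytes:
        h = midstate.copy()  # resume from the stored midstate
        h.update(msg)
        return h.digest()
    return tagged

fast = make_tagged_hasher('FROST/nonce')
assert fast(b'abc') == tagged_hash('FROST/nonce', b'abc')
```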
+
+def is_infinite(P: Optional[Point]) -> bool:
+ return P is None
+
+def x(P: Point) -> int:
+ assert not is_infinite(P)
+ return P[0]
+
+def y(P: Point) -> int:
+ assert not is_infinite(P)
+ return P[1]
+
+def point_add(P1: Optional[Point], P2: Optional[Point]) -> Optional[Point]:
+ if P1 is None:
+ return P2
+ if P2 is None:
+ return P1
+ if (x(P1) == x(P2)) and (y(P1) != y(P2)):
+ return None
+ if P1 == P2:
+ lam = (3 * x(P1) * x(P1) * pow(2 * y(P1), p - 2, p)) % p
+ else:
+ lam = ((y(P2) - y(P1)) * pow(x(P2) - x(P1), p - 2, p)) % p
+ x3 = (lam * lam - x(P1) - x(P2)) % p
+ return (x3, (lam * (x(P1) - x3) - y(P1)) % p)
+
+def point_mul(P: Optional[Point], n: int) -> Optional[Point]:
+ R = None
+ for i in range(256):
+ if (n >> i) & 1:
+ R = point_add(R, P)
+ P = point_add(P, P)
+ return R
+
+def bytes_from_int(x: int) -> bytes:
+ return x.to_bytes(32, byteorder="big")
+
+def lift_x(b: bytes) -> Optional[Point]:
+ x = int_from_bytes(b)
+ if x >= p:
+ return None
+ y_sq = (pow(x, 3, p) + 7) % p
+ y = pow(y_sq, (p + 1) // 4, p)
+ if pow(y, 2, p) != y_sq:
+ return None
+ return (x, y if y & 1 == 0 else p-y)
+
+def int_from_bytes(b: bytes) -> int:
+ return int.from_bytes(b, byteorder="big")
+
+def has_even_y(P: Point) -> bool:
+ assert not is_infinite(P)
+ return y(P) % 2 == 0
+
+def schnorr_verify(msg: bytes, pubkey: bytes, sig: bytes) -> bool:
+ if len(msg) != 32:
+ raise ValueError('The message must be a 32-byte array.')
+ if len(pubkey) != 32:
+ raise ValueError('The public key must be a 32-byte array.')
+ if len(sig) != 64:
+ raise ValueError('The signature must be a 64-byte array.')
+ P = lift_x(pubkey)
+ r = int_from_bytes(sig[0:32])
+ s = int_from_bytes(sig[32:64])
+ if (P is None) or (r >= p) or (s >= n):
+ return False
+ e = int_from_bytes(tagged_hash("BIP0340/challenge", sig[0:32] + pubkey + msg)) % n
+ R = point_add(point_mul(G, s), point_mul(P, n - e))
+ if (R is None) or (not has_even_y(R)) or (x(R) != r):
+ return False
+ return True
\ No newline at end of file
diff --git a/src/jmfrost/secp256k1proto/COPYING b/src/jmfrost/secp256k1proto/COPYING
new file mode 100644
index 0000000..e6d6e9f
--- /dev/null
+++ b/src/jmfrost/secp256k1proto/COPYING
@@ -0,0 +1,22 @@
+The MIT License (MIT)
+
+Copyright (c) 2009-2024 The Bitcoin Core developers
+Copyright (c) 2009-2024 Bitcoin Developers
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/src/jmfrost/secp256k1proto/__init__.py b/src/jmfrost/secp256k1proto/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/jmfrost/secp256k1proto/bip340.py b/src/jmfrost/secp256k1proto/bip340.py
new file mode 100644
index 0000000..2261ffa
--- /dev/null
+++ b/src/jmfrost/secp256k1proto/bip340.py
@@ -0,0 +1,73 @@
+# The following functions are based on the BIP 340 reference implementation:
+# https://github.com/bitcoin/bips/blob/master/bip-0340/reference.py
+
+from .secp256k1 import FE, GE, G
+from .util import int_from_bytes, bytes_from_int, xor_bytes, tagged_hash
+
+
+def pubkey_gen(seckey: bytes) -> bytes:
+ d0 = int_from_bytes(seckey)
+ if not (1 <= d0 <= GE.ORDER - 1):
+ raise ValueError("The secret key must be an integer in the range 1..n-1.")
+ P = d0 * G
+ assert not P.infinity
+ return P.to_bytes_xonly()
+
+
+def schnorr_sign(
+ msg: bytes, seckey: bytes, aux_rand: bytes, tag_prefix: str = "BIP0340"
+) -> bytes:
+ d0 = int_from_bytes(seckey)
+ if not (1 <= d0 <= GE.ORDER - 1):
+ raise ValueError("The secret key must be an integer in the range 1..n-1.")
+ if len(aux_rand) != 32:
+ raise ValueError("aux_rand must be 32 bytes instead of %i." % len(aux_rand))
+ P = d0 * G
+ assert not P.infinity
+ d = d0 if P.has_even_y() else GE.ORDER - d0
+ t = xor_bytes(bytes_from_int(d), tagged_hash(tag_prefix + "/aux", aux_rand))
+ k0 = (
+ int_from_bytes(tagged_hash(tag_prefix + "/nonce", t + P.to_bytes_xonly() + msg))
+ % GE.ORDER
+ )
+ if k0 == 0:
+ raise RuntimeError("Failure. This happens only with negligible probability.")
+ R = k0 * G
+ assert not R.infinity
+ k = k0 if R.has_even_y() else GE.ORDER - k0
+ e = (
+ int_from_bytes(
+ tagged_hash(
+ tag_prefix + "/challenge", R.to_bytes_xonly() + P.to_bytes_xonly() + msg
+ )
+ )
+ % GE.ORDER
+ )
+ sig = R.to_bytes_xonly() + bytes_from_int((k + e * d) % GE.ORDER)
+ assert schnorr_verify(msg, P.to_bytes_xonly(), sig, tag_prefix=tag_prefix)
+ return sig
+
+
+def schnorr_verify(
+ msg: bytes, pubkey: bytes, sig: bytes, tag_prefix: str = "BIP0340"
+) -> bool:
+ if len(pubkey) != 32:
+ raise ValueError("The public key must be a 32-byte array.")
+ if len(sig) != 64:
+ raise ValueError("The signature must be a 64-byte array.")
+ try:
+ P = GE.lift_x(int_from_bytes(pubkey))
+ except ValueError:
+ return False
+ r = int_from_bytes(sig[0:32])
+ s = int_from_bytes(sig[32:64])
+ if (r >= FE.SIZE) or (s >= GE.ORDER):
+ return False
+ e = (
+ int_from_bytes(tagged_hash(tag_prefix + "/challenge", sig[0:32] + pubkey + msg))
+ % GE.ORDER
+ )
+ R = s * G - e * P
+ if R.infinity or (not R.has_even_y()) or (R.x != r):
+ return False
+ return True
diff --git a/src/jmfrost/secp256k1proto/ecdh.py b/src/jmfrost/secp256k1proto/ecdh.py
new file mode 100644
index 0000000..6660c2d
--- /dev/null
+++ b/src/jmfrost/secp256k1proto/ecdh.py
@@ -0,0 +1,16 @@
+import hashlib
+
+from .secp256k1 import GE, Scalar
+
+
+def ecdh_compressed_in_raw_out(seckey: bytes, pubkey: bytes) -> GE:
+ """TODO"""
+ shared_secret = Scalar.from_bytes(seckey) * GE.from_bytes_compressed(pubkey)
+ assert not shared_secret.infinity # prime-order group
+ return shared_secret
+
+
+def ecdh_libsecp256k1(seckey: bytes, pubkey: bytes) -> bytes:
+ """TODO"""
+ shared_secret = ecdh_compressed_in_raw_out(seckey, pubkey)
+ return hashlib.sha256(shared_secret.to_bytes_compressed()).digest()
diff --git a/src/jmfrost/secp256k1proto/keys.py b/src/jmfrost/secp256k1proto/keys.py
new file mode 100644
index 0000000..3e28897
--- /dev/null
+++ b/src/jmfrost/secp256k1proto/keys.py
@@ -0,0 +1,15 @@
+from .secp256k1 import GE, G
+from .util import int_from_bytes
+
+# The following function is based on the BIP 327 reference implementation
+# https://github.com/bitcoin/bips/blob/master/bip-0327/reference.py
+
+
+# Return the plain public key corresponding to a given secret key
+def pubkey_gen_plain(seckey: bytes) -> bytes:
+ d0 = int_from_bytes(seckey)
+ if not (1 <= d0 <= GE.ORDER - 1):
+ raise ValueError("The secret key must be an integer in the range 1..n-1.")
+ P = d0 * G
+ assert not P.infinity
+ return P.to_bytes_compressed()
diff --git a/src/jmfrost/secp256k1proto/secp256k1.py b/src/jmfrost/secp256k1proto/secp256k1.py
new file mode 100644
index 0000000..2cf7a0f
--- /dev/null
+++ b/src/jmfrost/secp256k1proto/secp256k1.py
@@ -0,0 +1,438 @@
+# Copyright (c) 2022-2023 The Bitcoin Core developers
+# Distributed under the MIT software license, see the accompanying
+# file COPYING or http://www.opensource.org/licenses/mit-license.php.
+
+"""Test-only implementation of low-level secp256k1 field and group arithmetic
+
+It is designed for ease of understanding, not performance.
+
+WARNING: This code is slow and trivially vulnerable to side channel attacks. Do not use for
+anything but tests.
+
+Exports:
+* FE: class for secp256k1 field elements
+* GE: class for secp256k1 group elements
+* G: the secp256k1 generator point
+"""
+
+# TODO Docstrings of methods still say "field element"
+class APrimeFE:
+ """Objects of this class represent elements of a prime field.
+
+ They are represented internally in numerator / denominator form, in order to delay inversions.
+ """
+
+ # The size of the field (also its modulus and characteristic).
+ SIZE: int
+
+ def __init__(self, a=0, b=1):
+ """Initialize a field element a/b; both a and b can be ints or field elements."""
+ if isinstance(a, type(self)):
+ num = a._num
+ den = a._den
+ else:
+ num = a % self.SIZE
+ den = 1
+ if isinstance(b, type(self)):
+ den = (den * b._num) % self.SIZE
+ num = (num * b._den) % self.SIZE
+ else:
+ den = (den * b) % self.SIZE
+ assert den != 0
+ if num == 0:
+ den = 1
+ self._num = num
+ self._den = den
+
+ def __add__(self, a):
+ """Compute the sum of two field elements (second may be int)."""
+ if isinstance(a, type(self)):
+ return type(self)(self._num * a._den + self._den * a._num, self._den * a._den)
+ if isinstance(a, int):
+ return type(self)(self._num + self._den * a, self._den)
+ return NotImplemented
+
+ def __radd__(self, a):
+ """Compute the sum of an integer and a field element."""
+ return type(self)(a) + self
+
+ @classmethod
+ # REVIEW This should be
+ # def sum(cls, *es: Iterable[Self]) -> Self:
+ # but Self needs the typing_extensions package on Python <= 3.12.
+ def sum(cls, *es):
+ """Compute the sum of field elements.
+
+ sum(a, b, c, ...) is identical to (0 + a + b + c + ...)."""
+ return sum(es, start=cls(0))
+
+ def __sub__(self, a):
+ """Compute the difference of two field elements (second may be int)."""
+ if isinstance(a, type(self)):
+ return type(self)(self._num * a._den - self._den * a._num, self._den * a._den)
+ if isinstance(a, int):
+ return type(self)(self._num - self._den * a, self._den)
+ return NotImplemented
+
+ def __rsub__(self, a):
+ """Compute the difference of an integer and a field element."""
+ return type(self)(a) - self
+
+ def __mul__(self, a):
+ """Compute the product of two field elements (second may be int)."""
+ if isinstance(a, type(self)):
+ return type(self)(self._num * a._num, self._den * a._den)
+ if isinstance(a, int):
+ return type(self)(self._num * a, self._den)
+ return NotImplemented
+
+ def __rmul__(self, a):
+ """Compute the product of an integer with a field element."""
+ return type(self)(a) * self
+
+ def __truediv__(self, a):
+ """Compute the ratio of two field elements (second may be int)."""
+ if isinstance(a, type(self)) or isinstance(a, int):
+ return type(self)(self, a)
+ return NotImplemented
+
+ def __pow__(self, a):
+ """Raise a field element to an integer power."""
+ return type(self)(pow(self._num, a, self.SIZE), pow(self._den, a, self.SIZE))
+
+ def __neg__(self):
+ """Negate a field element."""
+ return type(self)(-self._num, self._den)
+
+ def __int__(self):
+ """Convert a field element to an integer in range 0..SIZE-1. The result is cached."""
+ if self._den != 1:
+ self._num = (self._num * pow(self._den, -1, self.SIZE)) % self.SIZE
+ self._den = 1
+ return self._num
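The lazy denominator clearing above relies on Python's built-in modular inverse, `pow(d, -1, p)` (available since Python 3.8). A quick sketch of what that normalization does, using the secp256k1 field modulus:

```python
p = 2**256 - 2**32 - 977  # FE.SIZE

den = 7
inv = pow(den, -1, p)      # modular inverse of the denominator
assert den * inv % p == 1
# the fraction 21/7 normalizes to the integer 3 (mod p)
assert (21 * inv) % p == 3
```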
+
+ def sqrt(self):
+ """Compute the square root of a field element if it exists (None otherwise)."""
+ raise NotImplementedError
+
+ def is_square(self):
+ """Determine if this field element has a square root."""
+ # A more efficient algorithm is possible here (Jacobi symbol).
+ return self.sqrt() is not None
+
+ def is_even(self):
+ """Determine whether this field element, represented as integer in 0..SIZE-1, is even."""
+ return int(self) & 1 == 0
+
+ def __eq__(self, a):
+ """Check whether two field elements are equal (second may be an int)."""
+ if isinstance(a, type(self)):
+ return (self._num * a._den - self._den * a._num) % self.SIZE == 0
+ return (self._num - self._den * a) % self.SIZE == 0
+
+ def to_bytes(self):
+ """Convert a field element to a 32-byte array (BE byte order)."""
+ return int(self).to_bytes(32, 'big')
+
+ @classmethod
+ def from_bytes(cls, b):
+ """Convert a 32-byte array to a field element (BE byte order, no overflow allowed)."""
+ v = int.from_bytes(b, 'big')
+ if v >= cls.SIZE:
+ raise ValueError
+ return cls(v)
+
+ def __str__(self):
+ """Convert this field element to a 64 character hex string."""
+ return f"{int(self):064x}"
+
+ def __repr__(self):
+ """Get a string representation of this field element."""
+ return f"{type(self).__qualname__}(0x{int(self):x})"
+
+
+class FE(APrimeFE):
+ SIZE = 2**256 - 2**32 - 977
+
+ def sqrt(self):
+ # Due to the fact that our modulus p is of the form (p % 4) == 3, the Tonelli-Shanks
+ # algorithm (https://en.wikipedia.org/wiki/Tonelli-Shanks_algorithm) is simply
+ # raising the argument to the power (p + 1) / 4.
+
+ # To see why: (p-1) % 2 = 0, so 2 divides the order of the multiplicative group,
+ # and thus only half of the non-zero field elements are squares. An element a is
+ # a (nonzero) square when Euler's criterion, a^((p-1)/2) = 1 (mod p), holds. We're
+ # looking for x such that x^2 = a (mod p). Given a^((p-1)/2) = 1, that is equivalent
+ # to x^2 = a^(1 + (p-1)/2) mod p. As (1 + (p-1)/2) is even, this is equivalent to
+ # x = a^((1 + (p-1)/2)/2) mod p, or x = a^((p+1)/4) mod p.
+ v = int(self)
+ s = pow(v, (self.SIZE + 1) // 4, self.SIZE)
+ if s**2 % self.SIZE == v:
+ return type(self)(s)
+ return None
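The `(p + 1) / 4` exponent trick used above (valid because p ≡ 3 mod 4) can be checked directly; a sketch:

```python
p = 2**256 - 2**32 - 977
assert p % 4 == 3

a = (12345 * 12345) % p        # a known quadratic residue
s = pow(a, (p + 1) // 4, p)    # candidate square root
assert s * s % p == a
```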
+
+
+class Scalar(APrimeFE):
+ """TODO Docstring"""
+ SIZE = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
+
+
+class GE:
+ """Objects of this class represent secp256k1 group elements (curve points or infinity)
+
+ GE objects are immutable.
+
+ Normal points on the curve have fields:
+ * x: the x coordinate (a field element)
+ * y: the y coordinate (a field element, satisfying y^2 = x^3 + 7)
+ * infinity: False
+
+ The point at infinity has field:
+ * infinity: True
+ """
+
+ # TODO The following two class attributes should probably be just getters as
+ # classmethods to enforce immutability. Unfortunately Python makes it hard
+ # to create "classproperties". `G` could then also be just a classmethod.
+
+ # Order of the group (number of points on the curve, plus 1 for infinity)
+ ORDER = Scalar.SIZE
+
+ # Number of valid distinct x coordinates on the curve.
+ ORDER_HALF = ORDER // 2
+
+ @property
+ def infinity(self):
+ """Whether the group element is the point at infinity."""
+ return self._infinity
+
+ @property
+ def x(self):
+ """The x coordinate (a field element) of a non-infinite group element."""
+ assert not self.infinity
+ return self._x
+
+ @property
+ def y(self):
+ """The y coordinate (a field element) of a non-infinite group element."""
+ assert not self.infinity
+ return self._y
+
+ def __init__(self, x=None, y=None):
+ """Initialize a group element with specified x and y coordinates, or infinity."""
+ if x is None:
+ # Initialize as infinity.
+ assert y is None
+ self._infinity = True
+ else:
+ # Initialize as point on the curve (and check that it is).
+ fx = FE(x)
+ fy = FE(y)
+ assert fy**2 == fx**3 + 7
+ self._infinity = False
+ self._x = fx
+ self._y = fy
+
+ def __add__(self, a):
+ """Add two group elements together."""
+ # Deal with infinity: a + infinity == infinity + a == a.
+ if self.infinity:
+ return a
+ if a.infinity:
+ return self
+ if self.x == a.x:
+ if self.y != a.y:
+ # A point added to its own negation is infinity.
+ assert self.y + a.y == 0
+ return GE()
+ else:
+ # For identical inputs, use the tangent (doubling formula).
+ lam = (3 * self.x**2) / (2 * self.y)
+ else:
+ # For distinct inputs, use the line through both points (adding formula).
+ lam = (self.y - a.y) / (self.x - a.x)
+ # Determine point opposite to the intersection of that line with the curve.
+ x = lam**2 - (self.x + a.x)
+ y = lam * (self.x - x) - self.y
+ return GE(x, y)
+
+ @staticmethod
+ def sum(*ps):
+ """Compute the sum of group elements.
+
+ GE.sum(a, b, c, ...) is identical to (GE() + a + b + c + ...)."""
+ return sum(ps, start=GE())
+
+ @staticmethod
+ def batch_mul(*aps):
+ """Compute a (batch) scalar group element multiplication.
+
+ GE.batch_mul((a1, p1), (a2, p2), (a3, p3)) is identical to a1*p1 + a2*p2 + a3*p3,
+ but more efficient."""
+ # Reduce all the scalars modulo order first (so we can deal with negatives etc).
+ naps = [(int(a), p) for a, p in aps]
+ # Start with point at infinity.
+ r = GE()
+ # Iterate over all bit positions, from high to low.
+ for i in range(255, -1, -1):
+ # Double what we have so far.
+ r = r + r
+ # Then add the points for which the corresponding scalar bit is set.
+ for (a, p) in naps:
+ if (a >> i) & 1:
+ r += p
+ return r
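An aside for reviewers (not part of the patch): the interleaved double-and-add above can be illustrated with plain integers standing in for group elements, where "doubling" is `* 2` and "point addition" is `+` (`batch_mul_int` is a hypothetical illustration name):

```python
def batch_mul_int(*aps):
    # same bit-interleaved loop as GE.batch_mul, over the additive group of integers
    r = 0
    for i in range(255, -1, -1):
        r = r + r                 # double the accumulator
        for a, p in aps:
            if (a >> i) & 1:      # add p where scalar bit i is set
                r += p
    return r

assert batch_mul_int((3, 5), (2, 7)) == 3 * 5 + 2 * 7
```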
+
+ def __rmul__(self, a):
+ """Multiply an integer with a group element."""
+ if self == G:
+ return FAST_G.mul(Scalar(a))
+ return GE.batch_mul((Scalar(a), self))
+
+ def __neg__(self):
+ """Compute the negation of a group element."""
+ if self.infinity:
+ return self
+ return GE(self.x, -self.y)
+
+ def __sub__(self, a):
+ """Subtract a group element from another."""
+ return self + (-a)
+
+ def __eq__(self, a):
+ """Check if two group elements are equal."""
+ return (self - a).infinity
+
+ def has_even_y(self):
+ """Determine whether a non-infinity group element has an even y coordinate."""
+ assert not self.infinity
+ return self.y.is_even()
+
+ def to_bytes_compressed(self):
+ """Convert a non-infinite group element to 33-byte compressed encoding."""
+ assert not self.infinity
+ return bytes([3 - self.y.is_even()]) + self.x.to_bytes()
+
+ def to_bytes_compressed_with_infinity(self):
+ """Convert a group element to 33-byte compressed encoding, mapping infinity to zeros."""
+ if self.infinity:
+ return 33 * b"\x00"
+ return self.to_bytes_compressed()
+
+ def to_bytes_uncompressed(self):
+ """Convert a non-infinite group element to 65-byte uncompressed encoding."""
+ assert not self.infinity
+ return b'\x04' + self.x.to_bytes() + self.y.to_bytes()
+
+ def to_bytes_xonly(self):
+ """Convert (the x coordinate of) a non-infinite group element to 32-byte xonly encoding."""
+ assert not self.infinity
+ return self.x.to_bytes()
+
+ @staticmethod
+ def lift_x(x):
+ """Return group element with specified field element as x coordinate (and even y)."""
+ y = (FE(x)**3 + 7).sqrt()
+ if y is None:
+ raise ValueError
+ if not y.is_even():
+ y = -y
+ return GE(x, y)
+
+ @staticmethod
+ def from_bytes_compressed(b):
+ """Convert a compressed encoding to a group element."""
+ assert len(b) == 33
+ if b[0] != 2 and b[0] != 3:
+ raise ValueError
+ x = FE.from_bytes(b[1:])
+ r = GE.lift_x(x)
+ if b[0] == 3:
+ r = -r
+ return r
+
+ @staticmethod
+ def from_bytes_uncompressed(b):
+ """Convert an uncompressed encoding to a group element."""
+ assert len(b) == 65
+ if b[0] != 4:
+ raise ValueError
+ x = FE.from_bytes(b[1:33])
+ y = FE.from_bytes(b[33:])
+ if y**2 != x**3 + 7:
+ raise ValueError
+ return GE(x, y)
+
+ @staticmethod
+ def from_bytes(b):
+ """Convert a compressed or uncompressed encoding to a group element."""
+ assert len(b) in (33, 65)
+ if len(b) == 33:
+ return GE.from_bytes_compressed(b)
+ else:
+ return GE.from_bytes_uncompressed(b)
+
+ @staticmethod
+ def from_bytes_xonly(b):
+ """Convert a point given in xonly encoding to a group element."""
+ assert len(b) == 32
+ x = FE.from_bytes(b)
+ r = GE.lift_x(x)
+ return r
+
+ @staticmethod
+ def is_valid_x(x):
+ """Determine whether the provided field element is a valid X coordinate."""
+ return (FE(x)**3 + 7).is_square()
+
+ def __str__(self):
+ """Convert this group element to a string."""
+ if self.infinity:
+ return "(inf)"
+ return f"({self.x},{self.y})"
+
+ def __repr__(self):
+ """Get a string representation for this group element."""
+ if self.infinity:
+ return "GE()"
+ return f"GE(0x{int(self.x):x},0x{int(self.y):x})"
+
+ def __hash__(self):
+ """Compute a non-cryptographic hash of the group element."""
+ if self.infinity:
+ return 0 # 0 is not a valid x coordinate
+ return int(self.x)
+
+
+# The secp256k1 generator point
+G = GE.lift_x(0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798)
+
+
+class FastGEMul:
+ """Table for fast multiplication with a constant group element.
+
+ Speed up scalar multiplication with a fixed point P by using a precomputed lookup table with
+ its powers of 2:
+
+ table = [P, 2*P, 4*P, (2^3)*P, (2^4)*P, ..., (2^255)*P]
+
+ During multiplication, the points corresponding to each bit set in the scalar are added up,
+ i.e. on average ~128 point additions take place.
+ """
+
+ def __init__(self, p):
+ self.table = [p] # table[i] = (2^i) * p
+ for _ in range(255):
+ p = p + p
+ self.table.append(p)
+
+ def mul(self, a):
+ result = GE()
+ a = int(a)
+ for bit in range(a.bit_length()):
+ if a & (1 << bit):
+ result += self.table[bit]
+ return result
+
+# Precomputed table with multiples of G for fast multiplication
+FAST_G = FastGEMul(G)
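The `FastGEMul` table above trades 255 doublings at setup time for pure additions per multiplication. The same precomputation idea can be sketched with plain modular integers standing in for group elements, with addition mod `M` playing the role of point addition; all names and the modulus here are illustrative, not part of the library:

```python
# Illustrative analogue of FastGEMul: integers mod M stand in for curve
# points, modular addition for point addition, so "scalar multiplication"
# a*p is recovered from a table of repeated doublings.
M = 2**61 - 1  # arbitrary modulus standing in for the group

def make_table(p: int, bits: int = 64) -> list:
    table = [p % M]  # table[i] = (2^i) * p, built by repeated doubling
    for _ in range(bits - 1):
        p = (p + p) % M
        table.append(p)
    return table

def table_mul(table: list, a: int) -> int:
    result = 0  # identity element (analogue of the point at infinity)
    for bit in range(a.bit_length()):
        if a & (1 << bit):
            result = (result + table[bit]) % M
    return result
```

As in `FastGEMul.mul`, only the set bits of the scalar cost an addition, so on average about half the table entries are touched.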
diff --git a/src/jmfrost/secp256k1proto/util.py b/src/jmfrost/secp256k1proto/util.py
new file mode 100644
index 0000000..d8c744b
--- /dev/null
+++ b/src/jmfrost/secp256k1proto/util.py
@@ -0,0 +1,24 @@
+import hashlib
+
+
+# This implementation can be sped up by storing the midstate after hashing
+# tag_hash instead of rehashing it all the time.
+def tagged_hash(tag: str, msg: bytes) -> bytes:
+ tag_hash = hashlib.sha256(tag.encode()).digest()
+ return hashlib.sha256(tag_hash + tag_hash + msg).digest()
+
+
+def bytes_from_int(x: int) -> bytes:
+ return x.to_bytes(32, byteorder="big")
+
+
+def xor_bytes(b0: bytes, b1: bytes) -> bytes:
+ return bytes(x ^ y for (x, y) in zip(b0, b1))
+
+
+def int_from_bytes(b: bytes) -> int:
+ return int.from_bytes(b, byteorder="big")
+
+
+def hash_sha256(b: bytes) -> bytes:
+ return hashlib.sha256(b).digest()
diff --git a/src/jmqtui/_compile.py b/src/jmqtui/_compile.py
index 278c1bc..77f3245 100644
--- a/src/jmqtui/_compile.py
+++ b/src/jmqtui/_compile.py
@@ -4,4 +4,4 @@ import os
# `gui-dev` dependencies must be installed prior to execution.
def compile_ui():
- os.system('pyside2-uic jmqtui/open_wallet_dialog.ui -o jmqtui/open_wallet_dialog.py')
+ os.system('pyside6-uic jmqtui/open_wallet_dialog.ui -o jmqtui/open_wallet_dialog.py')
diff --git a/src/jmqtui/open_wallet_dialog.py b/src/jmqtui/open_wallet_dialog.py
index bfaafce..9bde99e 100644
--- a/src/jmqtui/open_wallet_dialog.py
+++ b/src/jmqtui/open_wallet_dialog.py
@@ -8,12 +8,12 @@
## WARNING! All changes made in this file will be lost when recompiling UI file!
################################################################################
-from PySide2.QtCore import (QCoreApplication, QDate, QDateTime, QMetaObject,
+from PySide6.QtCore import (QCoreApplication, QDate, QDateTime, QMetaObject,
QObject, QPoint, QRect, QSize, QTime, QUrl, Qt)
-from PySide2.QtGui import (QBrush, QColor, QConicalGradient, QCursor, QFont,
+from PySide6.QtGui import (QBrush, QColor, QConicalGradient, QCursor, QFont,
QFontDatabase, QIcon, QKeySequence, QLinearGradient, QPalette, QPainter,
QPixmap, QRadialGradient)
-from PySide2.QtWidgets import *
+from PySide6.QtWidgets import *
class Ui_OpenWalletDialog(object):
@@ -108,7 +108,7 @@ class Ui_OpenWalletDialog(object):
self.errorMessageLabel.setPalette(palette)
font = QFont()
font.setBold(True)
- font.setWeight(75)
+ font.setWeight(QFont.Weight.Bold)
self.errorMessageLabel.setFont(font)
self.horizontalLayout_4.addWidget(self.errorMessageLabel)
diff --git a/test/jmfrost/chilldkg_example.py b/test/jmfrost/chilldkg_example.py
new file mode 100755
index 0000000..1ac5e66
--- /dev/null
+++ b/test/jmfrost/chilldkg_example.py
@@ -0,0 +1,301 @@
+#!/usr/bin/env python3
+
+"""Example of a full ChillDKG session"""
+
+from typing import Tuple, List, Optional
+import asyncio
+import pprint
+from random import randint
+from secrets import token_bytes as random_bytes
+import sys
+import argparse
+
+from jmfrost.chilldkg_ref.chilldkg import (
+ params_id,
+ hostpubkey_gen,
+ participant_step1,
+ participant_step2,
+ participant_finalize,
+ participant_investigate,
+ coordinator_step1,
+ coordinator_finalize,
+ coordinator_investigate,
+ SessionParams,
+ DKGOutput,
+ RecoveryData,
+ FaultyParticipantOrCoordinatorError,
+ UnknownFaultyParticipantOrCoordinatorError,
+)
+
+#
+# Network mocks to simulate full DKG sessions
+#
+
+
+class CoordinatorChannels:
+ def __init__(self, n):
+ self.n = n
+ self.queues = []
+ for i in range(n):
+ self.queues += [asyncio.Queue()]
+
+ def set_participant_queues(self, participant_queues):
+ self.participant_queues = participant_queues
+
+ def send_to(self, i, m):
+ assert self.participant_queues is not None
+ self.participant_queues[i].put_nowait(m)
+
+ def send_all(self, m):
+ assert self.participant_queues is not None
+ for i in range(self.n):
+ self.participant_queues[i].put_nowait(m)
+
+ async def receive_from(self, i):
+ item = await self.queues[i].get()
+ return item
+
+
+class ParticipantChannel:
+ def __init__(self, coord_queue):
+ self.queue = asyncio.Queue()
+ self.coord_queue = coord_queue
+
+ # Send m to coordinator
+ def send(self, m):
+ self.coord_queue.put_nowait(m)
+
+ async def receive(self):
+ item = await self.queue.get()
+ return item
+
+
+#
+# Helper functions
+#
+
+
+def pphex(thing):
+ """Pretty print an object with bytes as hex strings"""
+
+ def hexlify(thing):
+ if isinstance(thing, bytes):
+ return thing.hex()
+ if isinstance(thing, dict):
+ return {k: hexlify(v) for k, v in thing.items()}
+ if hasattr(thing, "_asdict"): # NamedTuple
+ return hexlify(thing._asdict())
+ if isinstance(thing, list):
+ return [hexlify(v) for v in thing]
+ return thing
+
+ pprint.pp(hexlify(thing))
+
+
+#
+# Protocol parties
+#
+
+
+async def participant(
+ chan: ParticipantChannel,
+ hostseckey: bytes,
+ params: SessionParams,
+ investigation_procedure: bool,
+) -> Tuple[DKGOutput, RecoveryData]:
+ # TODO Top-level error handling
+ random = random_bytes(32)
+ state1, pmsg1 = participant_step1(hostseckey, params, random)
+
+ chan.send(pmsg1)
+ cmsg1 = await chan.receive()
+
+ # Participants can implement an optional investigation procedure. This
+ # allows the participant to determine which participant is faulty when an
+ # `UnknownFaultyParticipantOrCoordinatorError` is raised. The investigation
+ # procedure requires the participant to receive an extra "investigation
+ # message" from the coordinator that contains necessary information.
+ #
+ # In this example, if the investigation procedure is enabled, the
+ # participant expects the coordinator to send an investigation message.
+ # Alternatively, an implementation of the participant can explicitly request
+ # the investigation message only if participant_step2 fails.
+ if investigation_procedure:
+ cinv = await chan.receive()
+
+ try:
+ state2, eq_round1 = participant_step2(hostseckey, state1, cmsg1)
+ except UnknownFaultyParticipantOrCoordinatorError as e:
+ if investigation_procedure:
+ participant_investigate(e, cinv)
+ else:
+ # If this participant does not implement the investigation
+ # procedure, it cannot determine which party is faulty. Re-raise
+ # UnknownFaultyPartyError in this case.
+ raise
+
+ chan.send(eq_round1)
+ cmsg2 = await chan.receive()
+
+ return participant_finalize(state2, cmsg2)
+
+
+async def coordinator(
+ chans: CoordinatorChannels, params: SessionParams, investigation_procedure: bool
+) -> Tuple[DKGOutput, RecoveryData]:
+ (hostpubkeys, t) = params
+ n = len(hostpubkeys)
+
+ pmsgs1 = []
+ for i in range(n):
+ pmsgs1.append(await chans.receive_from(i))
+ state, cmsg1 = coordinator_step1(pmsgs1, params)
+ chans.send_all(cmsg1)
+
+ # If the coordinator implements the investigation procedure and it is
+ # enabled, it sends an extra message to the participants.
+ if investigation_procedure:
+ inv_msgs = coordinator_investigate(pmsgs1)
+ for i in range(n):
+ chans.send_to(i, inv_msgs[i])
+
+ sigs = []
+ for i in range(n):
+ sigs += [await chans.receive_from(i)]
+ cmsg2, dkg_output, recovery_data = coordinator_finalize(state, sigs)
+ chans.send_all(cmsg2)
+
+ return dkg_output, recovery_data
+
+
+#
+# DKG Session
+#
+
+
+# This is a dummy participant used to demonstrate the investigation procedure.
+# It picks a random victim participant and sends an invalid share to it.
+async def faulty_participant(
+ chan: ParticipantChannel, hostseckey: bytes, params: SessionParams, idx: int
+):
+ random = random_bytes(32)
+ _, pmsg1 = participant_step1(hostseckey, params, random)
+
+ n = len(pmsg1.enc_pmsg.enc_shares)
+ # Pick random victim that is not this participant
+ victim = (idx + randint(1, n - 1)) % n
+ pmsg1.enc_pmsg.enc_shares[victim] += 17
+
+ chan.send(pmsg1)
+
+
+def simulate_chilldkg_full(
+ hostseckeys: List[bytes], params: SessionParams, faulty_idx: Optional[int]
+) -> List[Optional[Tuple[DKGOutput, RecoveryData]]]:
+ n = len(hostseckeys)
+ assert n == len(params.hostpubkeys)
+
+ # For demonstration purposes, we enable the investigation procedure if a
+ # participant is faulty.
+ investigation_procedure = faulty_idx is not None
+
+ async def session():
+ coord_chans = CoordinatorChannels(n)
+ participant_chans = [
+ ParticipantChannel(coord_chans.queues[i]) for i in range(n)
+ ]
+ coord_chans.set_participant_queues(
+ [participant_chans[i].queue for i in range(n)]
+ )
+ coroutines = [coordinator(coord_chans, params, investigation_procedure)] + [
+ participant(
+ participant_chans[i], hostseckeys[i], params, investigation_procedure
+ )
+ if i != faulty_idx
+ else faulty_participant(participant_chans[i], hostseckeys[i], params, i)
+ for i in range(n)
+ ]
+ return await asyncio.gather(*coroutines)
+
+ outputs = asyncio.run(session())
+ return outputs
+
+
+def main():
+ parser = argparse.ArgumentParser(description="ChillDKG example")
+ parser.add_argument(
+ "--faulty-participant",
+ action="store_true",
+ help="When this flag is set, one random participant will send an invalid message, and the investigation procedure will be enabled for other participants and the coordinator.",
+ )
+ parser.add_argument(
+ "t", nargs="?", type=int, default=2, help="Signing threshold [default = 2]"
+ )
+ parser.add_argument(
+ "n", nargs="?", type=int, default=3, help="Number of participants [default = 3]"
+ )
+ args = parser.parse_args()
+ t = args.t
+ n = args.n
+ if args.faulty_participant:
+ faulty_idx = randint(0, n - 1)
+ else:
+ faulty_idx = None
+
+ print("====== ChillDKG example session ======")
+ print(f"Using n = {n} participants and a threshold of t = {t}.")
+ if faulty_idx is not None:
+ print(f"Participant {faulty_idx} is faulty.")
+ print()
+
+ # Generate common inputs for all participants and coordinator
+ hostseckeys = [random_bytes(32) for _ in range(n)]
+ hostpubkeys = []
+ for i in range(n):
+ hostpubkeys += [hostpubkey_gen(hostseckeys[i])]
+ params = SessionParams(hostpubkeys, t)
+
+ print("=== Host secret keys ===")
+ pphex(hostseckeys)
+ print()
+
+ print("=== Session parameters ===")
+ pphex(params)
+ print()
+ print(f"Session parameters identifier: {params_id(params).hex()}")
+ print()
+
+ try:
+ rets = simulate_chilldkg_full(hostseckeys, params, faulty_idx)
+ except FaultyParticipantOrCoordinatorError as e:
+ print(
+ f"A participant has failed and is blaming either participant {e.participant} or the coordinator."
+ )
+ # If the blamed participant is the faulty participant, exit with code 0.
+ # Otherwise, re-raise the exception.
+ if faulty_idx == e.participant:
+ return 0
+ else:
+ raise
+
+ assert len(rets) == n + 1
+ print("=== Coordinator's DKG output ===")
+ dkg_output, _ = rets[0]
+ pphex(dkg_output)
+ print()
+
+ for i in range(n):
+ print(f"=== Participant {i}'s DKG output ===")
+ dkg_output, _ = rets[i + 1]
+ pphex(dkg_output)
+ print()
+
+ # Check that the RecoveryData of all parties is identical
+ assert len(set([rets[i][1] for i in range(n + 1)])) == 1
+ recovery_data = rets[0][1]
+ print(f"=== Common recovery data ({len(recovery_data)} bytes) ===")
+ print(recovery_data.hex())
+
+
+if __name__ == "__main__":
+ sys.exit(main())
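The `CoordinatorChannels`/`ParticipantChannel` mocks above reduce networking to one inbox `asyncio.Queue` per party: sending is a non-blocking `put_nowait` on the peer's queue, receiving is an `await get()` on one's own. A stripped-down sketch of the same pattern (party and function names are illustrative):

```python
import asyncio

# Each party owns an inbox Queue; "send" is put_nowait on the peer's inbox.
async def echo_party(inbox: asyncio.Queue, outbox: asyncio.Queue):
    msg = await inbox.get()          # block until a message arrives
    outbox.put_nowait(b"ack:" + msg)  # reply without blocking

async def session():
    a_in, b_in = asyncio.Queue(), asyncio.Queue()
    b_in.put_nowait(b"hello")        # deliver "hello" to party B's inbox
    task = asyncio.create_task(echo_party(b_in, a_in))
    reply = await a_in.get()         # party A receives the reply
    await task
    return reply
```

Running `asyncio.run(session())` drives both sides to completion in one event loop, which is how `simulate_chilldkg_full` gathers the coordinator and all participants.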
diff --git a/test/jmfrost/test_chilldkg_ref.py b/test/jmfrost/test_chilldkg_ref.py
new file mode 100755
index 0000000..af0c607
--- /dev/null
+++ b/test/jmfrost/test_chilldkg_ref.py
@@ -0,0 +1,395 @@
+#!/usr/bin/env python3
+
+"""Tests for ChillDKG reference implementation"""
+
+import pytest
+from itertools import combinations
+from random import randint
+from typing import Tuple, List, Optional
+from secrets import token_bytes as random_bytes
+
+from jmfrost.secp256k1proto.secp256k1 import GE, G, Scalar
+from jmfrost.secp256k1proto.keys import pubkey_gen_plain
+
+from jmfrost.chilldkg_ref.util import (
+ FaultyParticipantOrCoordinatorError,
+ FaultyCoordinatorError,
+ UnknownFaultyParticipantOrCoordinatorError,
+ tagged_hash_bip_dkg,
+)
+from jmfrost.chilldkg_ref.vss import Polynomial, VSS, VSSCommitment
+import jmfrost.chilldkg_ref.simplpedpop as simplpedpop
+import jmfrost.chilldkg_ref.encpedpop as encpedpop
+import jmfrost.chilldkg_ref.chilldkg as chilldkg
+
+from chilldkg_example import (
+ simulate_chilldkg_full as simulate_chilldkg_full_example)
+
+
+def test_chilldkg_params_validate():
+ hostseckeys = [random_bytes(32) for _ in range(3)]
+ hostpubkeys = [chilldkg.hostpubkey_gen(hostseckey) for hostseckey in hostseckeys]
+
+ with_duplicate = [hostpubkeys[0], hostpubkeys[1], hostpubkeys[2], hostpubkeys[1]]
+ params_with_duplicate = chilldkg.SessionParams(with_duplicate, 2)
+ try:
+ _ = chilldkg.params_id(params_with_duplicate)
+ except chilldkg.DuplicateHostPubkeyError as e:
+ assert {e.participant1, e.participant2} == {1, 3}
+ else:
+ assert False, "Expected exception"
+
+ invalid_hostpubkey = b"\x03" + 31 * b"\x00" + b"\x05" # Invalid x-coordinate
+ params_with_invalid = chilldkg.SessionParams(
+ [hostpubkeys[1], invalid_hostpubkey, hostpubkeys[2]], 1
+ )
+ try:
+ _ = chilldkg.params_id(params_with_invalid)
+ except chilldkg.InvalidHostPubkeyError as e:
+ assert e.participant == 1
+ pass
+ else:
+ assert False, "Expected exception"
+
+ try:
+ _ = chilldkg.params_id(
+ chilldkg.SessionParams(hostpubkeys, len(hostpubkeys) + 1)
+ )
+ except chilldkg.ThresholdOrCountError:
+ pass
+ else:
+ assert False, "Expected exception"
+
+ try:
+ _ = chilldkg.params_id(chilldkg.SessionParams(hostpubkeys, -2))
+ except chilldkg.ThresholdOrCountError:
+ pass
+ else:
+ assert False, "Expected exception"
+
+
+def test_vss_correctness():
+ def rand_polynomial(t):
+ return Polynomial([randint(1, GE.ORDER - 1) for _ in range(1, t + 1)])
+
+ for t in range(1, 3):
+ for n in range(t, 2 * t + 1):
+ f = rand_polynomial(t)
+ vss = VSS(f)
+ secshares = vss.secshares(n)
+ assert len(secshares) == n
+ assert all(
+ VSSCommitment.verify_secshare(secshares[i], vss.commit().pubshare(i))
+ for i in range(n)
+ )
+
+
+def simulate_simplpedpop(
+ seeds, t, investigation: bool
+) -> Optional[List[Tuple[simplpedpop.DKGOutput, bytes]]]:
+ n = len(seeds)
+ prets = []
+ for i in range(n):
+ prets += [simplpedpop.participant_step1(seeds[i], t, n, i)]
+
+ pstates = [pstate for (pstate, _, _) in prets]
+ pmsgs = [pmsg for (_, pmsg, _) in prets]
+
+ cmsg, cout, ceq = simplpedpop.coordinator_step(pmsgs, t, n)
+ pre_finalize_rets = [(cout, ceq)]
+ for i in range(n):
+ partial_secshares = [
+ partial_secshares_for[i] for (_, _, partial_secshares_for) in prets
+ ]
+ if investigation:
+ # Let a random participant send incorrect shares to participant i.
+ faulty_idx = randint(0, n - 1)
+ partial_secshares[faulty_idx] += Scalar(17)
+
+ secshare = simplpedpop.participant_step2_prepare_secshare(partial_secshares)
+ try:
+ pre_finalize_rets += [
+ simplpedpop.participant_step2(pstates[i], cmsg, secshare)
+ ]
+ except UnknownFaultyParticipantOrCoordinatorError as e:
+ if not investigation:
+ raise
+ inv_msgs = simplpedpop.coordinator_investigate(pmsgs)
+ assert len(inv_msgs) == len(pmsgs)
+ try:
+ simplpedpop.participant_investigate(e, inv_msgs[i], partial_secshares)
+ # If we're not faulty, we should blame the faulty party.
+ except FaultyParticipantOrCoordinatorError as e:
+ assert i != faulty_idx
+ assert e.participant == faulty_idx
+ # If we're faulty, we'll blame the coordinator.
+ except FaultyCoordinatorError:
+ assert i == faulty_idx
+ return None
+ return pre_finalize_rets
+
+
+def encpedpop_keys(seed: bytes) -> Tuple[bytes, bytes]:
+ deckey = tagged_hash_bip_dkg("encpedpop deckey", seed)
+ enckey = pubkey_gen_plain(deckey)
+ return deckey, enckey
+
+
+def simulate_encpedpop(
+ seeds, t, investigation: bool
+) -> Optional[List[Tuple[simplpedpop.DKGOutput, bytes]]]:
+ n = len(seeds)
+ enc_prets0 = []
+ enc_prets1 = []
+ for i in range(n):
+ enc_prets0 += [encpedpop_keys(seeds[i])]
+
+ enckeys = [pret[1] for pret in enc_prets0]
+ for i in range(n):
+ deckey = enc_prets0[i][0]
+ random = random_bytes(32)
+ enc_prets1 += [
+ encpedpop.participant_step1(seeds[i], deckey, enckeys, t, i, random)
+ ]
+
+ pstates = [pstate for (pstate, _) in enc_prets1]
+ pmsgs = [pmsg for (_, pmsg) in enc_prets1]
+ if investigation:
+ faulty_idx: List[int] = []
+ for i in range(n):
+ # Let a random participant faulty_idx[i] send incorrect shares to i.
+ faulty_idx[i:] = [randint(0, n - 1)]
+ pmsgs[faulty_idx[i]].enc_shares[i] += Scalar(17)
+
+ cmsg, cout, ceq, enc_secshares = encpedpop.coordinator_step(pmsgs, t, enckeys)
+ pre_finalize_rets = [(cout, ceq)]
+ for i in range(n):
+ deckey = enc_prets0[i][0]
+ try:
+ pre_finalize_rets += [
+ encpedpop.participant_step2(pstates[i], deckey, cmsg, enc_secshares[i])
+ ]
+ except UnknownFaultyParticipantOrCoordinatorError as e:
+ if not investigation:
+ raise
+ inv_msgs = encpedpop.coordinator_investigate(pmsgs)
+ assert len(inv_msgs) == len(pmsgs)
+ try:
+ encpedpop.participant_investigate(e, inv_msgs[i])
+ # If we're not faulty, we should blame the faulty party.
+ except FaultyParticipantOrCoordinatorError as e:
+ assert i != faulty_idx[i]
+ assert e.participant == faulty_idx[i]
+ # If we're faulty, we'll blame the coordinator.
+ except FaultyCoordinatorError:
+ assert i == faulty_idx[i]
+ return None
+ return pre_finalize_rets
+
+
+def simulate_chilldkg(
+ hostseckeys, t, investigation: bool
+) -> Optional[List[Tuple[chilldkg.DKGOutput, chilldkg.RecoveryData]]]:
+ n = len(hostseckeys)
+
+ hostpubkeys = []
+ for i in range(n):
+ hostpubkeys += [chilldkg.hostpubkey_gen(hostseckeys[i])]
+
+ params = chilldkg.SessionParams(hostpubkeys, t)
+
+ prets1 = []
+ for i in range(n):
+ random = random_bytes(32)
+ prets1 += [chilldkg.participant_step1(hostseckeys[i], params, random)]
+
+ pstates1 = [pret[0] for pret in prets1]
+ pmsgs = [pret[1] for pret in prets1]
+ if investigation:
+ faulty_idx: List[int] = []
+ for i in range(n):
+ # Let a random participant faulty_idx[i] send incorrect shares to i.
+ faulty_idx[i:] = [randint(0, n - 1)]
+ pmsgs[faulty_idx[i]].enc_pmsg.enc_shares[i] += Scalar(17)
+
+ cstate, cmsg1 = chilldkg.coordinator_step1(pmsgs, params)
+
+ prets2 = []
+ for i in range(n):
+ try:
+ prets2 += [chilldkg.participant_step2(hostseckeys[i], pstates1[i], cmsg1)]
+ except UnknownFaultyParticipantOrCoordinatorError as e:
+ if not investigation:
+ raise
+ inv_msgs = chilldkg.coordinator_investigate(pmsgs)
+ assert len(inv_msgs) == len(pmsgs)
+ try:
+ chilldkg.participant_investigate(e, inv_msgs[i])
+ # If we're not faulty, we should blame the faulty party.
+ except FaultyParticipantOrCoordinatorError as e:
+ assert i != faulty_idx[i]
+ assert e.participant == faulty_idx[i]
+ # If we're faulty, we'll blame the coordinator.
+ except FaultyCoordinatorError:
+ assert i == faulty_idx[i]
+ return None
+
+ cmsg2, cout, crec = chilldkg.coordinator_finalize(
+ cstate, [pret[1] for pret in prets2]
+ )
+ outputs = [(cout, crec)]
+ for i in range(n):
+ out = chilldkg.participant_finalize(prets2[i][0], cmsg2)
+ assert out is not None
+ outputs += [out]
+
+ return outputs
+
+
+def simulate_chilldkg_full(
+ hostseckeys,
+ t,
+ investigation: bool,
+) -> List[Optional[Tuple[chilldkg.DKGOutput, chilldkg.RecoveryData]]]:
+ # Investigating is not supported by this wrapper
+ assert not investigation
+
+ hostpubkeys = []
+ n = len(hostseckeys)
+ for i in range(n):
+ hostpubkeys += [chilldkg.hostpubkey_gen(hostseckeys[i])]
+ params = chilldkg.SessionParams(hostpubkeys, t)
+ return simulate_chilldkg_full_example(hostseckeys, params, faulty_idx=None)
+
+
+def derive_interpolating_value(L, x_i):
+ assert x_i in L
+ assert all(L.count(x_j) <= 1 for x_j in L)
+ lam = Scalar(1)
+ for x_j in L:
+ x_j = Scalar(x_j)
+ x_i = Scalar(x_i)
+ if x_j == x_i:
+ continue
+ lam *= x_j / (x_j - x_i)
+ return lam
+
+
+def recover_secret(participant_indices, shares) -> Scalar:
+ interpolated_shares = []
+ t = len(shares)
+ assert len(participant_indices) == t
+ for i in range(t):
+ lam = derive_interpolating_value(participant_indices, participant_indices[i])
+ interpolated_shares += [(lam * shares[i])]
+ recovered_secret = Scalar.sum(*interpolated_shares)
+ return recovered_secret
+
+
+def test_recover_secret():
+ f = Polynomial([23, 42])
+ shares = [f(i) for i in [1, 2, 3]]
+ assert recover_secret([1, 2], [shares[0], shares[1]]) == f.coeffs[0]
+ assert recover_secret([1, 3], [shares[0], shares[2]]) == f.coeffs[0]
+ assert recover_secret([2, 3], [shares[1], shares[2]]) == f.coeffs[0]
+
+
+def check_correctness_dkg_output(t, n, dkg_outputs: List[simplpedpop.DKGOutput]):
+ assert len(dkg_outputs) == n + 1
+ secshares = [out[0] for out in dkg_outputs]
+ threshold_pubkeys = [out[1] for out in dkg_outputs]
+ pubshares = [out[2] for out in dkg_outputs]
+
+ # Check that the threshold pubkey and pubshares are the same for the
+ # coordinator (at [0]) and all participants (at [1:n + 1]).
+ for i in range(n + 1):
+ assert threshold_pubkeys[0] == threshold_pubkeys[i]
+ assert len(pubshares[i]) == n
+ assert pubshares[0] == pubshares[i]
+ threshold_pubkey = threshold_pubkeys[0]
+
+ # Check that the coordinator has no secret share
+ assert secshares[0] is None
+
+ # Check that each secshare matches the corresponding pubshare
+ secshares_scalar = [
+ None if secshare is None else Scalar.from_bytes(secshare)
+ for secshare in secshares
+ ]
+ for i in range(1, n + 1):
+ assert secshares_scalar[i] * G == GE.from_bytes_compressed(pubshares[0][i - 1])
+
+ # Check that all combinations of t participants can recover the threshold pubkey
+ for tsubset in combinations(range(1, n + 1), t):
+ recovered = recover_secret(tsubset, [secshares_scalar[i] for i in tsubset])
+ assert recovered * G == GE.from_bytes_compressed(threshold_pubkey)
+
+
+@pytest.mark.parametrize('t,n,simulate_dkg,recovery,investigation', [
+ [1, 1, simulate_simplpedpop, False, False],
+ [1, 1, simulate_simplpedpop, False, True],
+ [1, 1, simulate_encpedpop, False, False],
+ [1, 1, simulate_encpedpop, False, True],
+ [1, 1, simulate_chilldkg, True, False],
+ [1, 1, simulate_chilldkg, True, True],
+ [1, 1, simulate_chilldkg_full, True, False],
+
+ [1, 2, simulate_simplpedpop, False, False],
+ [1, 2, simulate_simplpedpop, False, True],
+ [1, 2, simulate_encpedpop, False, False],
+ [1, 2, simulate_encpedpop, False, True],
+ [1, 2, simulate_chilldkg, True, False],
+ [1, 2, simulate_chilldkg, True, True],
+ [1, 2, simulate_chilldkg_full, True, False],
+
+ [2, 2, simulate_simplpedpop, False, False],
+ [2, 2, simulate_simplpedpop, False, True],
+ [2, 2, simulate_encpedpop, False, False],
+ [2, 2, simulate_encpedpop, False, True],
+ [2, 2, simulate_chilldkg, True, False],
+ [2, 2, simulate_chilldkg, True, True],
+ [2, 2, simulate_chilldkg_full, True, False],
+
+ [2, 3, simulate_simplpedpop, False, False],
+ [2, 3, simulate_simplpedpop, False, True],
+ [2, 3, simulate_encpedpop, False, False],
+ [2, 3, simulate_encpedpop, False, True],
+ [2, 3, simulate_chilldkg, True, False],
+ [2, 3, simulate_chilldkg, True, True],
+ [2, 3, simulate_chilldkg_full, True, False],
+
+ [2, 5, simulate_simplpedpop, False, False],
+ [2, 5, simulate_simplpedpop, False, True],
+ [2, 5, simulate_encpedpop, False, False],
+ [2, 5, simulate_encpedpop, False, True],
+ [2, 5, simulate_chilldkg, True, False],
+ [2, 5, simulate_chilldkg, True, True],
+ [2, 5, simulate_chilldkg_full, True, False],
+])
+def test_correctness(t, n, simulate_dkg, recovery, investigation):
+ seeds = [None] + [random_bytes(32) for _ in range(n)]
+
+ rets = simulate_dkg(seeds[1:], t, investigation=investigation)
+ if investigation:
+ assert rets is None
+ # The session has failed correctly, so there's nothing further to check.
+ return
+
+ # rets[0] are the return values from the coordinator
+ # rets[1 : n + 1] are from the participants
+ assert len(rets) == n + 1
+ dkg_outputs = [ret[0] for ret in rets]
+ check_correctness_dkg_output(t, n, dkg_outputs)
+
+ eqs_or_recs = [ret[1] for ret in rets]
+ for i in range(1, n + 1):
+ assert eqs_or_recs[0] == eqs_or_recs[i]
+
+ if recovery:
+ rec = eqs_or_recs[0]
+ # Check correctness of chilldkg.recover
+ for i in range(n + 1):
+ (secshare, threshold_pubkey, pubshares), _ = chilldkg.recover(seeds[i], rec)
+ assert secshare == dkg_outputs[i][0]
+ assert threshold_pubkey == dkg_outputs[i][1]
+ assert pubshares == dkg_outputs[i][2]
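`derive_interpolating_value` and `recover_secret` above implement standard Lagrange interpolation at zero. The same arithmetic can be checked with plain integers over a small prime field; the prime and names here are illustrative, standing in for the secp256k1 group order and `Scalar`:

```python
P = 2**31 - 1  # small prime standing in for the group order

def lagrange_coeff(indices: list, x_i: int) -> int:
    # Lagrange basis polynomial for x_i, evaluated at 0 (mod P).
    lam = 1
    for x_j in indices:
        if x_j == x_i:
            continue
        lam = lam * x_j % P * pow(x_j - x_i, -1, P) % P
    return lam

def recover(indices: list, shares: list) -> int:
    # Weighted sum of shares recovers the polynomial's constant term f(0).
    return sum(lagrange_coeff(indices, x) * s for x, s in zip(indices, shares)) % P

# f(x) = 23 + 42*x, shares at x = 1, 2, 3 (mirrors test_recover_secret)
shares = [(23 + 42 * x) % P for x in (1, 2, 3)]
```

Any t = 2 of the 3 shares recover f(0) = 23, which is exactly what `test_recover_secret` checks with `Scalar` arithmetic.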
diff --git a/test/jmfrost/test_frost_ref.py b/test/jmfrost/test_frost_ref.py
new file mode 100755
index 0000000..ff58071
--- /dev/null
+++ b/test/jmfrost/test_frost_ref.py
@@ -0,0 +1,505 @@
+# -*- coding: utf-8 -*-
+
+import json
+import os
+import secrets
+import sys
+from typing import List, Optional, Tuple
+
+from jmfrost.secp256k1proto.util import bytes_from_int, int_from_bytes
+
+from .trusted_keygen import trusted_dealer_keygen
+
+
+def fromhex_all(l):
+ return [bytes.fromhex(l_i) for l_i in l]
+
+# Check that calling `try_fn` raises `exception`. If `exception` is raised,
+# examine it with `except_fn`.
+def assert_raises(exception, try_fn, except_fn):
+ raised = False
+ try:
+ try_fn()
+ except exception as e:
+ raised = True
+ assert except_fn(e)
+ except BaseException:
+ raise AssertionError("Wrong exception raised in a test.")
+ if not raised:
+ raise AssertionError("Exception was _not_ raised in a test where it was required.")
+
+def get_error_details(test_case):
+ error = test_case["error"]
+ if error["type"] == "invalid_contribution":
+ exception = InvalidContributionError
+ if "contrib" in error:
+ except_fn = lambda e: e.id == error["signer_id"] and e.contrib == error["contrib"]
+ else:
+ except_fn = lambda e: e.id == error["signer_id"]
+ elif error["type"] == "value":
+ exception = ValueError
+ except_fn = lambda e: str(e) == error["message"]
+ else:
+ raise RuntimeError(f"Invalid error type: {error['type']}")
+ return exception, except_fn
+
+def generate_frost_keys(max_participants: int, min_participants: int) -> Tuple[PlainPk, List[bytes], List[bytes], List[PlainPk]]:
+ if not (2 <= min_participants <= max_participants):
+ raise ValueError('values must satisfy: 2 <= min_participants <= max_participants')
+
+ secret = secrets.randbelow(n - 1) + 1
+ P, secshares, pubshares = trusted_dealer_keygen(secret, max_participants, min_participants)
+
+ group_pk = PlainPk(cbytes(P))
+ ser_identifiers = [bytes_from_int(secshare_i[0]) for secshare_i in secshares]
+ ser_secshares = [bytes_from_int(secshare_i[1]) for secshare_i in secshares]
+ ser_pubshares = [PlainPk(cbytes(pubshare_i)) for pubshare_i in pubshares]
+ return (group_pk, ser_identifiers, ser_secshares, ser_pubshares)
+
+def test_keygen_vectors():
+ with open(os.path.join(sys.path[0], 'vectors', 'keygen_vectors.json')) as f:
+ test_data = json.load(f)
+
+ valid_test_cases = test_data["valid_test_cases"]
+ for test_case in valid_test_cases:
+ max_participants = test_case["max_participants"]
+ min_participants = test_case["min_participants"]
+ group_pk = bytes.fromhex(test_case["group_public_key"])
+ # assert the length using min & max participants?
+ ids = [bytes_from_int(i) for i in test_case["participant_identifiers"]]
+ pubshares = fromhex_all(test_case["participant_pubshares"])
+ secshares = fromhex_all(test_case["participant_secshares"])
+
+ assert check_frost_key_compatibility(max_participants, min_participants, group_pk, ids, secshares, pubshares) == True
+
+ pubshare_fail_test_cases = test_data["pubshare_correctness_fail_test_cases"]
+ for test_case in pubshare_fail_test_cases:
+ pubshares = fromhex_all(test_case["participant_pubshares"])
+ secshares = fromhex_all(test_case["participant_secshares"])
+
+ assert check_pubshares_correctness(secshares, pubshares) == False
+
+ group_pubkey_fail_test_cases = test_data["group_pubkey_correctness_fail_test_cases"]
+ for test_case in group_pubkey_fail_test_cases:
+ max_participants = test_case["max_participants"]
+ min_participants = test_case["min_participants"]
+ group_pk = bytes.fromhex(test_case["group_public_key"])
+ ids = [bytes_from_int(i) for i in test_case["participant_identifiers"]]
+ pubshares = fromhex_all(test_case["participant_pubshares"])
+ secshares = fromhex_all(test_case["participant_secshares"])
+
+ assert check_group_pubkey_correctness(min_participants, group_pk, ids, pubshares) == False
+
+def test_nonce_gen_vectors():
+ with open(os.path.join(sys.path[0], 'vectors', 'nonce_gen_vectors.json')) as f:
+ test_data = json.load(f)
+
+ for test_case in test_data["test_cases"]:
+ def get_value(key) -> bytes:
+ return bytes.fromhex(test_case[key])
+
+ def get_value_maybe(key) -> Optional[bytes]:
+ if test_case[key] is not None:
+ return get_value(key)
+ else:
+ return None
+
+ rand_ = get_value("rand_")
+ secshare = get_value_maybe("secshare")
+ pubshare = get_value_maybe("pubshare")
+ if pubshare is not None:
+ pubshare = PlainPk(pubshare)
+ group_pk = get_value_maybe("group_pk")
+ if group_pk is not None:
+ group_pk = XonlyPk(group_pk)
+ msg = get_value_maybe("msg")
+ extra_in = get_value_maybe("extra_in")
+ expected_secnonce = get_value("expected_secnonce")
+ expected_pubnonce = get_value("expected_pubnonce")
+
+ assert nonce_gen_internal(rand_, secshare, pubshare, group_pk, msg, extra_in) == (expected_secnonce, expected_pubnonce)
+
+def test_nonce_agg_vectors():
+ with open(os.path.join(sys.path[0], 'vectors', 'nonce_agg_vectors.json')) as f:
+ test_data = json.load(f)
+
+ pubnonces_list = fromhex_all(test_data["pubnonces"])
+ valid_test_cases = test_data["valid_test_cases"]
+ error_test_cases = test_data["error_test_cases"]
+
+ for test_case in valid_test_cases:
+ #todo: assert the min_participants <= len(pubnonces, ids) <= max_participants
+ #todo: assert the values of ids too? 1 <= id <= max_participants?
+ pubnonces = [pubnonces_list[i] for i in test_case["pubnonce_indices"]]
+ ids = [bytes_from_int(i) for i in test_case["participant_identifiers"]]
+ expected_aggnonce = bytes.fromhex(test_case["expected_aggnonce"])
+ assert nonce_agg(pubnonces, ids) == expected_aggnonce
+
+ for test_case in error_test_cases:
+ exception, except_fn = get_error_details(test_case)
+ pubnonces = [pubnonces_list[i] for i in test_case["pubnonce_indices"]]
+ ids = [bytes_from_int(i) for i in test_case["participant_identifiers"]]
+ assert_raises(exception, lambda: nonce_agg(pubnonces, ids), except_fn)
+
+# todo: include vectors from the frost draft too
+# todo: add a test where group_pk is even (might need to modify json file)
+def test_sign_verify_vectors():
+ with open(os.path.join(sys.path[0], 'vectors', 'sign_verify_vectors.json')) as f:
+ test_data = json.load(f)
+
+ max_participants = test_data["max_participants"]
+ min_participants = test_data["min_participants"]
+ group_pk = XonlyPk(bytes.fromhex(test_data["group_public_key"]))
+ secshare_p1 = bytes.fromhex(test_data["secshare_p1"])
+ ids = test_data["identifiers"]
+ pubshares = fromhex_all(test_data["pubshares"])
+ # The public key corresponding to the first participant (secshare_p1) is at index 0
+ assert pubshares[0] == individual_pk(secshare_p1)
+
+ secnonces_p1 = fromhex_all(test_data["secnonces_p1"])
+ pubnonces = fromhex_all(test_data["pubnonces"])
+    # The public nonce corresponding to the first participant (secnonces_p1[0]) is at index 0
+ k_1 = int_from_bytes(secnonces_p1[0][0:32])
+ k_2 = int_from_bytes(secnonces_p1[0][32:64])
+ R_s1 = point_mul(G, k_1)
+ R_s2 = point_mul(G, k_2)
+ assert R_s1 is not None and R_s2 is not None
+ assert pubnonces[0] == cbytes(R_s1) + cbytes(R_s2)
+
+ aggnonces = fromhex_all(test_data["aggnonces"])
+ msgs = fromhex_all(test_data["msgs"])
+
+ valid_test_cases = test_data["valid_test_cases"]
+ sign_error_test_cases = test_data["sign_error_test_cases"]
+ verify_fail_test_cases = test_data["verify_fail_test_cases"]
+ verify_error_test_cases = test_data["verify_error_test_cases"]
+
+ for test_case in valid_test_cases:
+ ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
+ pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+ pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
+ aggnonce_tmp = aggnonces[test_case["aggnonce_index"]]
+ # Make sure that pubnonces and aggnonce in the test vector are consistent
+ assert nonce_agg(pubnonces_tmp, ids_tmp) == aggnonce_tmp
+ msg = msgs[test_case["msg_index"]]
+ signer_index = test_case["signer_index"]
+ my_id = ids_tmp[signer_index]
+ expected = bytes.fromhex(test_case["expected"])
+
+ session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, [], [], msg)
+ # WARNING: An actual implementation should _not_ copy the secnonce.
+ # Reusing the secnonce, as we do here for testing purposes, can leak the
+ # secret key.
+ secnonce_tmp = bytearray(secnonces_p1[0])
+ assert sign(secnonce_tmp, secshare_p1, my_id, session_ctx) == expected
+ assert partial_sig_verify(expected, ids_tmp, pubnonces_tmp, pubshares_tmp, [], [], msg, signer_index)
+
+ for test_case in sign_error_test_cases:
+ exception, except_fn = get_error_details(test_case)
+ ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
+ pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+ aggnonce_tmp = aggnonces[test_case["aggnonce_index"]]
+ msg = msgs[test_case["msg_index"]]
+ signer_index = test_case["signer_index"]
+ my_id = bytes_from_int(test_case["signer_id"]) if signer_index is None else ids_tmp[signer_index]
+ secnonce_tmp = bytearray(secnonces_p1[test_case["secnonce_index"]])
+
+ session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, [], [], msg)
+ assert_raises(exception, lambda: sign(secnonce_tmp, secshare_p1, my_id, session_ctx), except_fn)
+
+ for test_case in verify_fail_test_cases:
+ psig = bytes.fromhex(test_case["psig"])
+ ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
+ pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+ pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
+ msg = msgs[test_case["msg_index"]]
+ signer_index = test_case["signer_index"]
+
+ assert not partial_sig_verify(psig, ids_tmp, pubnonces_tmp, pubshares_tmp, [], [], msg, signer_index)
+
+ for test_case in verify_error_test_cases:
+ exception, except_fn = get_error_details(test_case)
+
+ psig = bytes.fromhex(test_case["psig"])
+ ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
+ pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+ pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
+ msg = msgs[test_case["msg_index"]]
+ signer_index = test_case["signer_index"]
+ assert_raises(exception, lambda: partial_sig_verify(psig, ids_tmp, pubnonces_tmp, pubshares_tmp, [], [], msg, signer_index), except_fn)
+
+def test_tweak_vectors():
+ with open(os.path.join(sys.path[0], 'vectors', 'tweak_vectors.json')) as f:
+ test_data = json.load(f)
+
+ max_participants = test_data["max_participants"]
+ min_participants = test_data["min_participants"]
+ group_pk = XonlyPk(bytes.fromhex(test_data["group_public_key"]))
+ secshare_p1 = bytes.fromhex(test_data["secshare_p1"])
+ ids = test_data["identifiers"]
+ pubshares = fromhex_all(test_data["pubshares"])
+ # The public key corresponding to the first participant (secshare_p1) is at index 0
+ assert pubshares[0] == individual_pk(secshare_p1)
+
+ secnonce_p1 = bytearray(bytes.fromhex(test_data["secnonce_p1"]))
+ pubnonces = fromhex_all(test_data["pubnonces"])
+    # The public nonce corresponding to the first participant (secnonce_p1) is at index 0
+ k_1 = int_from_bytes(secnonce_p1[0:32])
+ k_2 = int_from_bytes(secnonce_p1[32:64])
+ R_s1 = point_mul(G, k_1)
+ R_s2 = point_mul(G, k_2)
+ assert R_s1 is not None and R_s2 is not None
+ assert pubnonces[0] == cbytes(R_s1) + cbytes(R_s2)
+
+ aggnonces = fromhex_all(test_data["aggnonces"])
+ tweaks = fromhex_all(test_data["tweaks"])
+
+ msg = bytes.fromhex(test_data["msg"])
+
+ valid_test_cases = test_data["valid_test_cases"]
+ error_test_cases = test_data["error_test_cases"]
+
+ for test_case in valid_test_cases:
+ ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
+ pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+ pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
+ aggnonce_tmp = aggnonces[test_case["aggnonce_index"]]
+ # Make sure that pubnonces and aggnonce in the test vector are consistent
+ assert nonce_agg(pubnonces_tmp, ids_tmp) == aggnonce_tmp
+ tweaks_tmp = [tweaks[i] for i in test_case["tweak_indices"]]
+ tweak_modes_tmp = test_case["is_xonly"]
+ signer_index = test_case["signer_index"]
+ my_id = ids_tmp[signer_index]
+ expected = bytes.fromhex(test_case["expected"])
+
+ session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, tweaks_tmp, tweak_modes_tmp, msg)
+ # WARNING: An actual implementation should _not_ copy the secnonce.
+ # Reusing the secnonce, as we do here for testing purposes, can leak the
+ # secret key.
+ secnonce_tmp = bytearray(secnonce_p1)
+ assert sign(secnonce_tmp, secshare_p1, my_id, session_ctx) == expected
+ assert partial_sig_verify(expected, ids_tmp, pubnonces_tmp, pubshares_tmp, tweaks_tmp, tweak_modes_tmp, msg, signer_index)
+
+ for test_case in error_test_cases:
+ exception, except_fn = get_error_details(test_case)
+ ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
+ pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+ aggnonce_tmp = aggnonces[test_case["aggnonce_index"]]
+ tweaks_tmp = [tweaks[i] for i in test_case["tweak_indices"]]
+ tweak_modes_tmp = test_case["is_xonly"]
+ signer_index = test_case["signer_index"]
+ my_id = ids_tmp[signer_index]
+
+ session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, tweaks_tmp, tweak_modes_tmp, msg)
+ assert_raises(exception, lambda: sign(secnonce_p1, secshare_p1, my_id, session_ctx), except_fn)
+
+def test_det_sign_vectors():
+ with open(os.path.join(sys.path[0], 'vectors', 'det_sign_vectors.json')) as f:
+ test_data = json.load(f)
+
+ max_participants = test_data["max_participants"]
+ min_participants = test_data["min_participants"]
+ group_pk = XonlyPk(bytes.fromhex(test_data["group_public_key"]))
+ secshare_p1 = bytes.fromhex(test_data["secshare_p1"])
+ ids = test_data["identifiers"]
+ pubshares = fromhex_all(test_data["pubshares"])
+ # The public key corresponding to the first participant (secshare_p1) is at index 0
+ assert pubshares[0] == individual_pk(secshare_p1)
+
+ msgs = fromhex_all(test_data["msgs"])
+
+ valid_test_cases = test_data["valid_test_cases"]
+ sign_error_test_cases = test_data["sign_error_test_cases"]
+
+ for test_case in valid_test_cases:
+ ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
+ pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+ aggothernonce = bytes.fromhex(test_case["aggothernonce"])
+ tweaks = fromhex_all(test_case["tweaks"])
+ is_xonly = test_case["is_xonly"]
+ msg = msgs[test_case["msg_index"]]
+ signer_index = test_case["signer_index"]
+ my_id = ids_tmp[signer_index]
+ rand = bytes.fromhex(test_case["rand"]) if test_case["rand"] is not None else None
+ expected = fromhex_all(test_case["expected"])
+
+ pubnonce, psig = deterministic_sign(secshare_p1, my_id, aggothernonce, ids_tmp, pubshares_tmp, tweaks, is_xonly, msg, rand)
+ assert pubnonce == expected[0]
+ assert psig == expected[1]
+
+ pubnonces = [aggothernonce, pubnonce]
+ aggnonce_tmp = nonce_agg(pubnonces, [AGGREGATOR_ID, my_id])
+ session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, tweaks, is_xonly, msg)
+ assert partial_sig_verify_internal(psig, my_id, pubnonce, pubshares_tmp[signer_index], session_ctx)
+
+ for test_case in sign_error_test_cases:
+ exception, except_fn = get_error_details(test_case)
+ ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
+ pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+ aggothernonce = bytes.fromhex(test_case["aggothernonce"])
+ tweaks = fromhex_all(test_case["tweaks"])
+ is_xonly = test_case["is_xonly"]
+ msg = msgs[test_case["msg_index"]]
+ signer_index = test_case["signer_index"]
+ my_id = bytes_from_int(test_case["signer_id"]) if signer_index is None else ids_tmp[signer_index]
+ rand = bytes.fromhex(test_case["rand"]) if test_case["rand"] is not None else None
+
+ try_fn = lambda: deterministic_sign(secshare_p1, my_id, aggothernonce, ids_tmp, pubshares_tmp, tweaks, is_xonly, msg, rand)
+ assert_raises(exception, try_fn, except_fn)
+
+def test_sig_agg_vectors():
+ with open(os.path.join(sys.path[0], 'vectors', 'sig_agg_vectors.json')) as f:
+ test_data = json.load(f)
+
+ max_participants = test_data["max_participants"]
+ min_participants = test_data["min_participants"]
+ group_pk = XonlyPk(bytes.fromhex(test_data["group_public_key"]))
+ ids = test_data["identifiers"]
+ pubshares = fromhex_all(test_data["pubshares"])
+ # These nonces are only required if the tested API takes the individual
+ # nonces and not the aggregate nonce.
+ pubnonces = fromhex_all(test_data["pubnonces"])
+
+ tweaks = fromhex_all(test_data["tweaks"])
+ psigs = fromhex_all(test_data["psigs"])
+ msg = bytes.fromhex(test_data["msg"])
+
+ valid_test_cases = test_data["valid_test_cases"]
+ error_test_cases = test_data["error_test_cases"]
+
+ for test_case in valid_test_cases:
+ ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
+ pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+ pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
+ aggnonce_tmp = bytes.fromhex(test_case["aggnonce"])
+ # Make sure that pubnonces and aggnonce in the test vector are consistent
+ assert aggnonce_tmp == nonce_agg(pubnonces_tmp, ids_tmp)
+
+ tweaks_tmp = [tweaks[i] for i in test_case["tweak_indices"]]
+ tweak_modes_tmp = test_case["is_xonly"]
+ psigs_tmp = [psigs[i] for i in test_case["psig_indices"]]
+ expected = bytes.fromhex(test_case["expected"])
+
+ session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, tweaks_tmp, tweak_modes_tmp, msg)
+        # Make sure that the partial signatures in the test vector are
+        # consistent. Implementations whose API takes only the aggnonce
+        # (not the pubnonces list) may skip this check.
+        for i in range(len(ids_tmp)):
+            assert partial_sig_verify(psigs_tmp[i], ids_tmp, pubnonces_tmp, pubshares_tmp, tweaks_tmp, tweak_modes_tmp, msg, i)
+
+ bip340sig = partial_sig_agg(psigs_tmp, ids_tmp, session_ctx)
+ assert bip340sig == expected
+ tweaked_group_pk = get_xonly_pk(group_pubkey_and_tweak(pubshares_tmp, ids_tmp, tweaks_tmp, tweak_modes_tmp))
+ assert schnorr_verify(msg, tweaked_group_pk, bip340sig)
+
+ for test_case in error_test_cases:
+ exception, except_fn = get_error_details(test_case)
+
+ ids_tmp = [bytes_from_int(ids[i]) for i in test_case["id_indices"]]
+ pubshares_tmp = [PlainPk(pubshares[i]) for i in test_case["pubshare_indices"]]
+ pubnonces_tmp = [pubnonces[i] for i in test_case["pubnonce_indices"]]
+ aggnonce_tmp = bytes.fromhex(test_case["aggnonce"])
+
+ tweaks_tmp = [tweaks[i] for i in test_case["tweak_indices"]]
+ tweak_modes_tmp = test_case["is_xonly"]
+ psigs_tmp = [psigs[i] for i in test_case["psig_indices"]]
+
+ session_ctx = SessionContext(aggnonce_tmp, ids_tmp, pubshares_tmp, tweaks_tmp, tweak_modes_tmp, msg)
+ assert_raises(exception, lambda: partial_sig_agg(psigs_tmp, ids_tmp, session_ctx), except_fn)
+
+def test_sign_and_verify_random(iterations: int) -> None:
+ for itr in range(iterations):
+ secure_rng = secrets.SystemRandom()
+ # randomly choose a number: 2 <= number <= 10
+ max_participants = secure_rng.randrange(2, 11)
+ # randomly choose a number: 2 <= number <= max_participants
+ min_participants = secure_rng.randrange(2, max_participants + 1)
+
+ group_pk, ids, secshares, pubshares = generate_frost_keys(max_participants, min_participants)
+ assert len(ids) == len(secshares) == len(pubshares) == max_participants
+ assert check_frost_key_compatibility(max_participants, min_participants, group_pk, ids, secshares, pubshares)
+
+ # randomly choose the signer set, with len: min_participants <= len <= max_participants
+ signer_count = secure_rng.randrange(min_participants, max_participants + 1)
+ signer_indices = secure_rng.sample(range(max_participants), signer_count)
+ assert len(set(signer_indices)) == signer_count # signer set must not contain duplicate ids
+
+ signer_ids = [ids[i] for i in signer_indices]
+ signer_pubshares = [pubshares[i] for i in signer_indices]
+        # NOTE: secret values must never be copied in production code;
+        # we do it here only for test readability.
+        signer_secshares = [secshares[i] for i in signer_indices]
+
+ # In this example, the message and group pubkey are known
+ # before nonce generation, so they can be passed into the nonce
+ # generation function as a defense-in-depth measure to protect
+ # against nonce reuse.
+ #
+ # If these values are not known when nonce_gen is called, empty
+ # byte arrays can be passed in for the corresponding arguments
+ # instead.
+ msg = secrets.token_bytes(32)
+ v = secrets.randbelow(4)
+ tweaks = [secrets.token_bytes(32) for _ in range(v)]
+ tweak_modes = [secrets.choice([False, True]) for _ in range(v)]
+ tweaked_group_pk = get_xonly_pk(group_pubkey_and_tweak(signer_pubshares, signer_ids, tweaks, tweak_modes))
+
+ signer_secnonces = []
+ signer_pubnonces = []
+ for i in range(signer_count - 1):
+ # Use a clock for extra_in
+ t = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
+ secnonce_i, pubnonce_i = nonce_gen(signer_secshares[i], signer_pubshares[i], tweaked_group_pk, msg, t.to_bytes(8, 'big'))
+ signer_secnonces.append(secnonce_i)
+ signer_pubnonces.append(pubnonce_i)
+
+        # On even iterations use the regular signing algorithm for the final
+        # signer; otherwise use the deterministic signing algorithm
+ if itr % 2 == 0:
+ t = time.clock_gettime_ns(time.CLOCK_MONOTONIC)
+ secnonce_final, pubnonce_final = nonce_gen(signer_secshares[-1], signer_pubshares[-1], tweaked_group_pk, msg, t.to_bytes(8, 'big'))
+ signer_secnonces.append(secnonce_final)
+ else:
+ aggothernonce = nonce_agg(signer_pubnonces, signer_ids[:-1])
+ rand = secrets.token_bytes(32)
+ pubnonce_final, psig_final = deterministic_sign(signer_secshares[-1], signer_ids[-1], aggothernonce, signer_ids, signer_pubshares, tweaks, tweak_modes, msg, rand)
+
+ signer_pubnonces.append(pubnonce_final)
+ aggnonce = nonce_agg(signer_pubnonces, signer_ids)
+ session_ctx = SessionContext(aggnonce, signer_ids, signer_pubshares, tweaks, tweak_modes, msg)
+
+ signer_psigs = []
+ for i in range(signer_count):
+ if itr % 2 != 0 and i == signer_count - 1:
+ psig_i = psig_final # last signer would have already deterministically signed
+ else:
+ psig_i = sign(signer_secnonces[i], signer_secshares[i], signer_ids[i], session_ctx)
+ assert partial_sig_verify(psig_i, signer_ids, signer_pubnonces, signer_pubshares, tweaks, tweak_modes, msg, i)
+ signer_psigs.append(psig_i)
+
+ # An exception is thrown if secnonce is accidentally reused
+ assert_raises(ValueError, lambda: sign(signer_secnonces[0], signer_secshares[0], signer_ids[0], session_ctx), lambda e: True)
+
+ # Wrong signer index
+ assert not partial_sig_verify(signer_psigs[0], signer_ids, signer_pubnonces, signer_pubshares, tweaks, tweak_modes, msg, 1)
+ # Wrong message
+ assert not partial_sig_verify(signer_psigs[0], signer_ids, signer_pubnonces, signer_pubshares, tweaks, tweak_modes, secrets.token_bytes(32), 0)
+
+ bip340sig = partial_sig_agg(signer_psigs, signer_ids, session_ctx)
+ assert schnorr_verify(msg, tweaked_group_pk, bip340sig)
+
+def run_test(test_name, test_func):
+ max_len = 30
+ test_name = test_name.ljust(max_len, ".")
+ print(f"Running {test_name}...", end="", flush=True)
+ try:
+ test_func()
+ print("Passed!")
+ except Exception as e:
+ print(f"Failed :'(\nError: {e}")
+
+if __name__ == '__main__':
+ run_test("test_keygen_vectors", test_keygen_vectors)
+ run_test("test_nonce_gen_vectors", test_nonce_gen_vectors)
+ run_test("test_nonce_agg_vectors", test_nonce_agg_vectors)
+ run_test("test_sign_verify_vectors", test_sign_verify_vectors)
+ run_test("test_tweak_vectors", test_tweak_vectors)
+ run_test("test_det_sign_vectors", test_det_sign_vectors)
+ run_test("test_sig_agg_vectors", test_sig_agg_vectors)
+ run_test("test_sign_and_verify_random", lambda: test_sign_and_verify_random(6))
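The error-case loops above all go through an `assert_raises(exception, try_fn, except_fn)` helper, which is defined earlier in this file (outside this hunk). A minimal sketch of what such a helper could look like — the exact signature and behaviour here are assumptions for illustration, not the file's actual implementation:

```python
def assert_raises(exception, try_fn, except_fn):
    """Check that try_fn() raises `exception` and that the raised
    instance satisfies the `except_fn` predicate.

    NOTE: hypothetical sketch; the real helper in this test file may differ.
    """
    raised = False
    try:
        try_fn()
    except exception as e:
        raised = True
        # except_fn lets a test case inspect the exception's payload
        # (e.g. the signer_id of an invalid contribution)
        assert except_fn(e), f"exception predicate failed for: {e!r}"
    assert raised, f"expected {exception.__name__} to be raised"

# Example: bytes.fromhex raises ValueError on non-hex input
assert_raises(ValueError, lambda: bytes.fromhex("zz"), lambda e: True)
```

An exception of a different type is deliberately allowed to propagate, so a mismatched error type surfaces as a test failure rather than being swallowed.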
diff --git a/test/jmfrost/vectors/det_sign_vectors.json b/test/jmfrost/vectors/det_sign_vectors.json
new file mode 100644
index 0000000..aa02e90
--- /dev/null
+++ b/test/jmfrost/vectors/det_sign_vectors.json
@@ -0,0 +1,283 @@
+{
+ "max_participants": 5,
+ "min_participants": 3,
+ "group_public_key": "037940B3ED1FDC360252A6F48058C7B94276DFB6AA2B7D51706FB48326B19E7AE1",
+ "secshare_p1":"81D0D40CDF044588167A987C14552954DB187AC5AD3B1CA40D7B03DCA32AFDFB",
+ "identifiers": [1, 2, 3, 4, 5],
+ "pubshares": [
+ "02BB66437FCAA01292BFB4BB6F19D67818FE693215C36C4663857F1DC8AB8BF4FA",
+ "02C3250013C86AA9C3011CD40B2658CBC5B950DD21FFAA4EDE1BB66E18A063CED5",
+ "03259D7068335012C08C5D80E181969ED7FFA08F7973E3ED9C8C0BFF3EC03C223E",
+ "02A22971750242F6DA35B8DB0DFE74F38A3227118B296ADD2C65E324E2B7EB20AD",
+ "03541293535BB662F8294C4BEB7EA25F55FEAE86C6BAE0CEBD741EAAA28639A6E6",
+ "020000000000000000000000000000000000000000000000000000000000000007"
+ ],
+ "msgs": [
+ "F95466D086770E689964664219266FE5ED215C92AE20BAB5C9D79ADDDDF3C0CF",
+ "",
+ "2626262626262626262626262626262626262626262626262626262626262626262626262626"
+ ],
+ "valid_test_cases": [
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": 0,
+ "expected": [
+ "038E14A90FB2C66535B42850F009E2F1857000433042EE647066034FDE7F5A3F3C026CD7BDD51BE1490486F1E905B90020CB8294AFE7B6A051069C07D3B2FD9DC12A",
+ "89FA301CA35D6BD839089D0EBA7EA16B2C90818103BAA85F92FE6C01F0E0FB61"
+ ],
+ "comment": "Signing with minimum number of participants"
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [1, 0, 2],
+ "pubshare_indices": [1, 0, 2],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": 1,
+ "expected": [
+ "038E14A90FB2C66535B42850F009E2F1857000433042EE647066034FDE7F5A3F3C026CD7BDD51BE1490486F1E905B90020CB8294AFE7B6A051069C07D3B2FD9DC12A",
+ "89FA301CA35D6BD839089D0EBA7EA16B2C90818103BAA85F92FE6C01F0E0FB61"
+ ],
+      "comment": "Partial signature shouldn't change if the order of the signer set changes. Note: deterministic signing generates the same secnonces because the parameters are unchanged"
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [2, 1, 0],
+ "pubshare_indices": [2, 1, 0],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": 2,
+ "expected": [
+ "038E14A90FB2C66535B42850F009E2F1857000433042EE647066034FDE7F5A3F3C026CD7BDD51BE1490486F1E905B90020CB8294AFE7B6A051069C07D3B2FD9DC12A",
+ "89FA301CA35D6BD839089D0EBA7EA16B2C90818103BAA85F92FE6C01F0E0FB61"
+ ],
+      "comment": "Partial signature shouldn't change if the order of the signer set changes. Note: deterministic signing generates the same secnonces because the parameters are unchanged"
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [0, 3, 4],
+ "pubshare_indices": [0, 3, 4],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": 0,
+ "expected": [
+ "038E14A90FB2C66535B42850F009E2F1857000433042EE647066034FDE7F5A3F3C026CD7BDD51BE1490486F1E905B90020CB8294AFE7B6A051069C07D3B2FD9DC12A",
+ "E5C27E441A9D433CDC4A36F669967E4304435CE5E6E7722D871237C3B4A2EC99"
+ ],
+      "comment": "Partial signature changes if the membership of the signer set changes"
+ },
+ {
+ "rand": null,
+ "aggothernonce": "02D26EF7E09A4BC0A2CF295720C64BAD56A28EF50B6BECBD59AF6F3ADE6C2480C503D11B9993AE4C2D38EA2591287F7B744976F0F0B79104B96D6399507FC533E893",
+ "id_indices": [0, 1, 2, 3],
+ "pubshare_indices": [0, 1, 2, 3],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": 0,
+ "expected": [
+ "02EEE6300500FB508012424A0F47621F9A844A939020DD64C4254939D848B675A5037BDEA362CBE55D6D36A7635FC21ED8AC2FA05E9B63A8242E07969F6E2D4E36E5",
+ "97440C51FCB602911455E6147938F5B81C0C1AF32ADAFD98F5A66A4616289D5D"
+ ],
+ "comment": "Signing without auxiliary randomness"
+ },
+ {
+ "rand": "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF",
+ "aggothernonce": "03C7E3D6456228347B658911BF612967F36C7791C24F9607ADB34E09F8CC1126D803D2C9C6E3D1A11463F8C2D57B145A814F5D44FD1A42F7A024140AC30D48EE0BEE",
+ "id_indices": [0, 1, 2, 3, 4],
+ "pubshare_indices": [0, 1, 2, 3, 4],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": 0,
+ "expected": [
+ "020EBAD8A2F6099A0A0A62439F0A2A0E7DF6918DDE55183AFFF112DF2940FF76DE026C4A1C132CF16CFCFC28FEB02651C44719C900DF6F16407711CA8DB31E2A46B8",
+ "83271933ECB71C566F3BA61A645B1396251CBF7EDA77B1D2AF5C689003AB631B"
+ ],
+ "comment": "Signing with maximum number of participants and maximum auxiliary randomness value"
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 1,
+ "signer_index": 0,
+ "expected": [
+ "0203375B47194F99B8B682B9DCDFB972A066C243BC7AA951A792FF02A707A3C7870367C40EE43583D0FC0F44696BED09D9B89652FC45B738FF03AF8ECA854A5424B1",
+ "2D2F6A697B0632291E3240D9E48F82A454EEB9F566987CB5E7612C0B75D41208"
+ ],
+ "comment": "Empty message"
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 2,
+ "signer_index": 0,
+ "expected": [
+ "0256B5FD4623C09A0E073CE04FF488DA0C4319A528CBA3FC26307682AD2CAD069003F8E94981F0D4A0A879CFAEEE0A060DF1E12889FB7C3CEAC498310827F63CBDE2",
+ "347C67E959FCA9460F907C0D2CAF5DD427E5CFD7E15330BA38DA6E986ED91B0E"
+ ],
+ "comment": "Message longer than 32 bytes (38-byte msg)"
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "tweaks": ["E8F791FF9225A2AF0102AFFF4A9A723D9612A682A25EBE79802B263CDFCD83BB"],
+ "is_xonly": [true],
+ "msg_index": 0,
+ "signer_index": 0,
+ "expected": [
+ "0341E28C13AB55A689C4698F31AD68250636B9E41FACCB0D358B4BD9A3DF09B1920311E0CED48F4B3B51E010159D3657FD8EC9DFF1FD30AD28FC126F62AA1C53C451",
+ "817169757CF62879BCB2F1DFE895E6781664CA0D18534290C22EC0E40187B7FC"
+ ],
+ "comment": "Tweaked group public key"
+ }
+ ],
+ "sign_error_test_cases": [
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [3, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": null,
+ "signer_id": 1,
+ "error": {
+ "type": "value",
+ "message": "The signer's id must be present in the participant identifier list."
+ },
+ "comment": "The signer's id is not in the participant identifier list."
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [0, 1, 2, 1],
+ "pubshare_indices": [0, 1, 2, 1],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": 0,
+ "error": {
+ "type": "value",
+ "message": "The participant identifier list must contain unique elements."
+ },
+ "comment": "The participant identifier list contains duplicate elements."
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [3, 1, 2],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": 0,
+ "error": {
+ "type": "value",
+ "message": "The signer's pubshare must be included in the list of pubshares."
+ },
+ "comment": "The signer's pubshare is not in the list of pubshares. This test case is optional: it can be skipped by implementations that do not check that the signer's pubshare is included in the list of pubshares."
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": 0,
+ "error": {
+ "type": "value",
+ "message": "The pubshares and ids arrays must have the same length."
+ },
+      "comment": "The participant identifier count exceeds the participant public share count"
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 5],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": 0,
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": 3,
+ "contrib": "pubshare"
+ },
+ "comment": "Signer 3 provided an invalid participant public share"
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "048465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD61037496A3CC86926D452CAFCFD55D25972CA1675D549310DE296BFF42F72EEEA8C9",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": 0,
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": null,
+ "contrib": "aggothernonce"
+ },
+      "comment": "aggothernonce is invalid due to the wrong tag, 0x04, in the first half"
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "0000000000000000000000000000000000000000000000000000000000000000000287BF891D2A6DEAEBADC909352AA9405D1428C15F4B75F04DAE642A95C2548480",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "tweaks": [],
+ "is_xonly": [],
+ "msg_index": 0,
+ "signer_index": 0,
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": null,
+ "contrib": "aggothernonce"
+ },
+ "comment": "aggothernonce is invalid because first half corresponds to point at infinity"
+ },
+ {
+ "rand": "0000000000000000000000000000000000000000000000000000000000000000",
+ "aggothernonce": "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "tweaks": ["FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141"],
+ "is_xonly": [false],
+ "msg_index": 0,
+ "signer_index": 0,
+ "error": {
+ "type": "value",
+ "message": "The tweak must be less than n."
+ },
+ "comment": "Tweak is invalid because it exceeds group size"
+ }
+ ]
+}
diff --git a/test/jmfrost/vectors/keygen_vectors.json b/test/jmfrost/vectors/keygen_vectors.json
new file mode 100644
index 0000000..b7c7640
--- /dev/null
+++ b/test/jmfrost/vectors/keygen_vectors.json
@@ -0,0 +1,78 @@
+{
+ "valid_test_cases": [
+ {
+ "max_participants": 3,
+ "min_participants": 2,
+ "group_public_key": "02F37C34B66CED1FB51C34A90BDAE006901F10625CC06C4F64663B0EAE87D87B4F",
+ "participant_identifiers": [1, 2, 3],
+ "participant_pubshares": [
+ "026BAEE4BF7D4B9C4567DFFF6F3C2C76DF5C082E9320CD8187D6AB5965BC5A119A",
+ "03DACC9463E5186F3C81AE1B314F7B09001A22B28BB56AD0ABD3F376818F9604AB",
+ "031404710E938032DB0D4F6A4CD20AE37384BE98BA9FE05B42D139361202B391E6"
+ ],
+ "participant_secshares": [
+ "08F89FFE80AC94DCB920C26F3F46140BFC7F95B493F8310F5FC1EA2B01F4254C",
+ "04F0FEAC2EDCEDC6CE1253B7FAB8C86B856A797F44D83D82A385554E6E401984",
+ "00E95D59DD0D46B0E303E500B62B7CCB0E555D49F5B849F5E748C071DA8C0DBC"
+ ]
+ },
+ {
+ "max_participants": 5,
+ "min_participants": 3,
+ "group_public_key": "037940B3ED1FDC360252A6F48058C7B94276DFB6AA2B7D51706FB48326B19E7AE1",
+ "participant_identifiers": [1, 2, 3, 4, 5],
+ "participant_pubshares": [
+ "02BB66437FCAA01292BFB4BB6F19D67818FE693215C36C4663857F1DC8AB8BF4FA",
+ "02C3250013C86AA9C3011CD40B2658CBC5B950DD21FFAA4EDE1BB66E18A063CED5",
+ "03259D7068335012C08C5D80E181969ED7FFA08F7973E3ED9C8C0BFF3EC03C223E",
+ "02A22971750242F6DA35B8DB0DFE74F38A3227118B296ADD2C65E324E2B7EB20AD",
+ "03541293535BB662F8294C4BEB7EA25F55FEAE86C6BAE0CEBD741EAAA28639A6E6"
+
+ ],
+ "participant_secshares": [
+ "81D0D40CDF044588167A987C14552954DB187AC5AD3B1CA40D7B03DCA32AFDFB",
+ "10130412FDB9A10F7DF862CE8763311B7D1B7AACF211ED32272F0DAC49DF6743",
+ "1362A14AE07243C93C24E7EEA3FB8C619338C24925F8E5E488DAE1D3DE7B2236",
+ "8BBFABB4872E2DB5510027DC6A1E3B271D70519A48F006BB327E805360FE2ED4",
+ "792A234FF1ED5ED3BC8A2297D9CB3D6D61134BB9ABAEAF7A64478A9E01324BDC"
+ ]
+ }
+ ],
+ "pubshare_correctness_fail_test_cases": [
+ {
+ "max_participants": 2,
+ "min_participants": 2,
+ "group_public_key": "0256C92CA18AD18E5E14075E4CDA4C9471E1F69EFF06DA31B9DB8C431697457C96",
+ "participant_identifiers": [1, 2],
+ "participant_pubshares": [
+ "02EF27116868EEC72F1AEF13F0383A83479DB7DFBDE55B568ADC0ABC28B0C82AEB",
+ "0381EE46DB9582B6AA84AB1F39CAAD930899B44ACCB75EDFFBB29CDB8E2136F2A7"
+ ],
+ "participant_secshares": [
+ "1903097297A1E0FD75FCBCDB66DC21C65ACEC527100566459F1BBF2FA7388D53",
+ "B9B2CD71F1C09B8D6F675D05CDF1396B28FF626CD8C69B9DF4D3B6BDCB57EFF2"
+ ],
+ "comment": "Invalid pubshare (parity flipped) for participant 2"
+ }
+ ],
+ "group_pubkey_correctness_fail_test_cases": [
+ {
+ "max_participants": 3,
+ "min_participants": 3,
+ "group_public_key": "0354F1E67AAFFB49654AF3EE5B0C68D8CF24468D014453F1F13B5221512A0BCE78",
+ "participant_identifiers": [1, 2, 3],
+ "participant_pubshares": [
+ "037A01FF2705D679CDC34E04366CC3BA95BD9E883AC7E33B640D744BE6BCC2D140",
+ "039E2C0AE44EA1203606D04B711667C07D1695ADC36FBF07DD37B7ECA85490262C",
+ "027C782638AD6A8A95DEDF6CBA940E89E827EC5C4FCF693EAB7D70927C3CA59FDB"
+ ],
+ "participant_secshares": [
+ "A3236A9D6EF252A5C59F17B544ECE39487FFD80F158EB93F8AA4AF707BFA5511",
+ "7FA1BE8BCC29555EFAAC4B19D47E26467E056B9DE2F6E0B7B844940FD43D1047",
+ "8BACD727EA7C2156F476BFC8EF5B332FE0663464AC3F117C0B69D6460A4AD25D"
+ ],
+ "comment": "Invalid group public key (parity flipped)"
+ }
+ ]
+}
\ No newline at end of file
diff --git a/test/jmfrost/vectors/nonce_agg_vectors.json b/test/jmfrost/vectors/nonce_agg_vectors.json
new file mode 100644
index 0000000..4a541ff
--- /dev/null
+++ b/test/jmfrost/vectors/nonce_agg_vectors.json
@@ -0,0 +1,56 @@
+{
+ "pubnonces": [
+ "020151C80F435648DF67A22B749CD798CE54E0321D034B92B709B567D60A42E66603BA47FBC1834437B3212E89A84D8425E7BF12E0245D98262268EBDCB385D50641",
+ "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60248C264CDD57D3C24D79990B0F865674EB62A0F9018277A95011B41BFC193B833",
+ "020151C80F435648DF67A22B749CD798CE54E0321D034B92B709B567D60A42E6660279BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798",
+ "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60379BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798",
+ "04FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60248C264CDD57D3C24D79990B0F865674EB62A0F9018277A95011B41BFC193B833",
+ "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A60248C264CDD57D3C24D79990B0F865674EB62A0F9018277A95011B41BFC193B831",
+ "03FF406FFD8ADB9CD29877E4985014F66A59F6CD01C0E88CAA8E5F3166B1F676A602FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC30"
+ ],
+ "valid_test_cases": [
+ {
+ "pubnonce_indices": [0, 1],
+ "participant_identifiers": [1, 2],
+ "expected_aggnonce": "035FE1873B4F2967F52FEA4A06AD5A8ECCBE9D0FD73068012C894E2E87CCB5804B024725377345BDE0E9C33AF3C43C0A29A9249F2F2956FA8CFEB55C8573D0262DC8"
+ },
+ {
+ "pubnonce_indices": [2, 3],
+ "participant_identifiers": [1, 2],
+ "expected_aggnonce": "035FE1873B4F2967F52FEA4A06AD5A8ECCBE9D0FD73068012C894E2E87CCB5804B000000000000000000000000000000000000000000000000000000000000000000",
+ "comment": "Sum of second points encoded in the nonces is point at infinity which is serialized as 33 zero bytes"
+ }
+ ],
+ "error_test_cases": [
+ {
+ "pubnonce_indices": [0, 4],
+ "participant_identifiers": [1, 2],
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": 2,
+ "contrib": "pubnonce"
+ },
+ "comment": "Public nonce from signer 2 is invalid due wrong tag, 0x04, in the first half"
+ },
+ {
+ "pubnonce_indices": [5, 1],
+ "participant_identifiers": [1, 2],
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": 1,
+ "contrib": "pubnonce"
+ },
+ "comment": "Public nonce from signer 1 is invalid because the second half does not correspond to an X coordinate"
+ },
+ {
+ "pubnonce_indices": [6, 1],
+ "participant_identifiers": [1, 2],
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": 1,
+ "contrib": "pubnonce"
+ },
+ "comment": "Public nonce from signer 1 is invalid because second half exceeds field size"
+ }
+ ]
+}
diff --git a/test/jmfrost/vectors/nonce_gen_vectors.json b/test/jmfrost/vectors/nonce_gen_vectors.json
new file mode 100644
index 0000000..05f845c
--- /dev/null
+++ b/test/jmfrost/vectors/nonce_gen_vectors.json
@@ -0,0 +1,48 @@
+{
+ "test_cases": [
+ {
+ "rand_": "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F",
+ "secshare": "0202020202020202020202020202020202020202020202020202020202020202",
+ "pubshare": "024D4B6CD1361032CA9BD2AEB9D900AA4D45D9EAD80AC9423374C451A7254D0766",
+ "group_pk": "0707070707070707070707070707070707070707070707070707070707070707",
+ "msg": "0101010101010101010101010101010101010101010101010101010101010101",
+ "extra_in": "0808080808080808080808080808080808080808080808080808080808080808",
+ "expected_secnonce": "6135CE36209DB5E3E7B7A11ADE54D3028D3CFF089DA3C2EC7766921CC4FB3D1BBCD8A7035194A76F43D278C3CD541AEE014663A2251DDE34E8D900EDF1CAA3D9",
+ "expected_pubnonce": "02A5671568FD7AEA35369FE4A32530FD0D0A23D125627BEA374D9FA5676F645A6103EC4E899B1DBEFC08C51F48E3AFA8503759E9ECD3DE674D3C93FD0D92E15E631A",
+ "comment": ""
+ },
+ {
+ "rand_": "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F",
+ "secshare": "0202020202020202020202020202020202020202020202020202020202020202",
+ "pubshare": "024D4B6CD1361032CA9BD2AEB9D900AA4D45D9EAD80AC9423374C451A7254D0766",
+ "group_pk": "0707070707070707070707070707070707070707070707070707070707070707",
+ "msg": "",
+ "extra_in": "0808080808080808080808080808080808080808080808080808080808080808",
+ "expected_secnonce": "91EB573A7D57A17F1C7465D7301BCF90915B5731CDA408644819933DA3E366E354C8BF875D966C02C095428B4D2780AC8B929090EEE9AEF5E4DF250533FE9A08",
+ "expected_pubnonce": "0337513529D07800E8D1B7056456223BFA26B0C12C921ADC87114537D4A65E2E390257723240C10831A1DFD0489DAAA7DF204717EA27147DD9361480D984C763D8A2",
+ "comment": "Empty message"
+ },
+ {
+ "rand_": "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F",
+ "secshare": "0202020202020202020202020202020202020202020202020202020202020202",
+ "pubshare": "024D4B6CD1361032CA9BD2AEB9D900AA4D45D9EAD80AC9423374C451A7254D0766",
+ "group_pk": "0707070707070707070707070707070707070707070707070707070707070707",
+ "msg": "2626262626262626262626262626262626262626262626262626262626262626262626262626",
+ "extra_in": "0808080808080808080808080808080808080808080808080808080808080808",
+ "expected_secnonce": "379F4A71682AEF59A022272B7226F02A870F6958873726E33906E765AA36C71D70418EE5C83B76A6BC0E84F04F4F3D92DE83994400404EC8AE35CEA0ECD378AF",
+ "expected_pubnonce": "02A5EA11BCF1BC60AA96D1BE0816F373A57FD00991BE6106FD5AB1F6986FAA2BA0030AF6A8479B9C91958F256AEC38339FD25D4F42A073EEB862B42282E00F323A4C",
+ "comment": "38-byte message"
+ },
+ {
+ "rand_": "0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F0F",
+ "secshare": null,
+ "pubshare": null,
+ "group_pk": null,
+ "msg": null,
+ "extra_in": null,
+ "expected_secnonce": "E8E239B64F9A4D2B03508C029EEFC8156A3AD899FD58B15759C93C7DA745C3550FABE3F7CDD361407B97C1353056310D1610D478633C5DDE04DEC4917591D2E5",
+ "expected_pubnonce": "0399059E50AF7B23F89E1ED7B17A7B24F2D746C663057F6C3B696A416C99C7A1070383C53B9CF236EADF8BDFEB1C3E9A188A1A84190687CD67916DF9BC60CD2D80EC",
+ "comment": "Every optional parameter is absent"
+ }
+ ]
+}
diff --git a/test/jmfrost/vectors/sig_agg_vectors.json b/test/jmfrost/vectors/sig_agg_vectors.json
new file mode 100644
index 0000000..a8e2e72
--- /dev/null
+++ b/test/jmfrost/vectors/sig_agg_vectors.json
@@ -0,0 +1,132 @@
+{
+ "max_participants": 5,
+ "min_participants": 3,
+ "group_public_key": "037940B3ED1FDC360252A6F48058C7B94276DFB6AA2B7D51706FB48326B19E7AE1",
+ "identifiers": [1, 2, 3, 4, 5],
+ "pubshares": [
+ "02BB66437FCAA01292BFB4BB6F19D67818FE693215C36C4663857F1DC8AB8BF4FA",
+ "02C3250013C86AA9C3011CD40B2658CBC5B950DD21FFAA4EDE1BB66E18A063CED5",
+ "03259D7068335012C08C5D80E181969ED7FFA08F7973E3ED9C8C0BFF3EC03C223E",
+ "02A22971750242F6DA35B8DB0DFE74F38A3227118B296ADD2C65E324E2B7EB20AD",
+ "03541293535BB662F8294C4BEB7EA25F55FEAE86C6BAE0CEBD741EAAA28639A6E6"
+ ],
+ "pubnonces": [
+ "021AD89905F193EC1CBED15CDD5F4F0E04FF187648390639C88AC291F2F88D407E02FD49462A942948DF6718EEE86FDD858B124375E6A034A4985D19EE95431E9E03",
+ "03A0640E5746CC90EC3EF04F133AF1B79DE67011927A9BA1510B9254E9C8698062037209BB6915B573D2E6394032E508B8285DD498FE8A85971AAB01ACF0C785A56B",
+ "02861EFD258C9099BEF14FA9B3B4E6229595D8200FC72D27F789D4CCC4352BB32B038496DA1C20DFE16D24D20F0374812347EE9CFF06928802C04A2D1B2D369F4628",
+ "0398DD496FFE3C14D2DDFB5D9FD1189DB829301A83C45F2A1DDF07238529F75D1D0233E8FF18899A66276D27AE5CE28A5170EEAAC4F05DEACC8E7DB1C55F8985495F",
+ "03C7B31E363526D04B5D31148EE6B042AF8CC7DFA922A42A69EB78B816D458D0B20257495EC72B1E59FB90A48B036FBD3D9AE4415C49B6171E108185124B99DE56AA"
+ ],
+ "tweaks": [
+ "B511DA492182A91B0FFB9A98020D55F260AE86D7ECBD0399C7383D59A5F2AF7C",
+ "A815FE049EE3C5AAB66310477FBC8BCCCAC2F3395F59F921C364ACD78A2F48DC",
+ "75448A87274B056468B977BE06EB1E9F657577B7320B0A3376EA51FD420D18A8"
+ ],
+ "msg": "599C67EA410D005B9DA90817CF03ED3B1C868E4DA4EDF00A5880B0082C237869",
+ "psigs": [
+ "447D69D4E02693E3F6C04E198F34E89E17D65DC29C92E635E8BFB8D2908DCA6A",
+ "E7E02FDE0FA66D116C0DCF932F7976D611A4D0CF225087C2B8282153E461FA8B",
+ "E84B98E0B132F4049B061A949EF69E3DFBEB3E2712AEE2DEE0C5B6D517860339",
+ "714B7FCF4D3EA2F4BB2B22F786AEBF0C65E1A6E6FBEF04C39B60EAA1CA257CD5",
+ "DA815BBE9D06203D5ADD3AD5D3FE5F0D5405939EFD7EA3FED6DACA9E5449AD80",
+ "8E367AE4000EEEFCEF7F83DA1AC96181DC51BA0D83E0F834F67A0CFD487DBEF7",
+ "9CAB74D0FBCF14D89330D81C85482B8C720DC69899187F3A5432A5856609E92D",
+ "351F38F8B3126944362D9B3F0D83791CF3D623E746B84A58012DF4C9383909EC",
+ "B9ABA5EE2181EDE7A0D3D29DB147741F66B5A8EF3BB6C9CFD1FAD0D98E5A8A93",
+ "A2DF2C5ECB1141E0B55F47711BBDAE491F2F22D967FA1D9569200B7FB0754AD6",
+ "441DFF8E4E0E8368D21BD3DD70F151C7C581EC2B1931B8F041CC8C052FEBF046",
+ "DDC813A7AA07415634F2F6CC10984EF68216C75EA4F7A8E883DBA163C41CE2BA",
+ "2D64FC0371D08A7069997C1009814AF9C964DB64AEDB919AC229DA774AB09888",
+ "5F6481FC18E4CB223CB5BAB966165A1033349267702E7D75B5A0E5CACEA0E6A0",
+ "312170A9C271F67D09C8BE06A468106505CF6B7CD4DB1A40E02AF13213069EB0",
+ "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141"
+ ],
+ "valid_test_cases": [
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "pubnonce_indices": [0, 1, 2],
+ "aggnonce": "022265D41FDEFDC64072C7E168B345D547208C6E02E4D76E2F0D91C0773CF9FC250230B496E7FC1C45EFFB0687CFFC556FDA507F69CAE9894022828903DC3198DAFE",
+ "tweak_indices": [],
+ "is_xonly": [],
+ "psig_indices": [0, 1, 2],
+ "expected": "8471BE6E49D0E43097DD32DA374039149F5D00165A8AD369AE86E362D13730DA14A93293A0FFF4F9FDD438415DA4FDB4B008B2EB730110600208D3E1EC0945AC",
+ "comment": "Signing with minimum number of participants"
+ },
+ {
+ "id_indices": [2, 0, 1],
+ "pubshare_indices": [2, 0, 1],
+ "pubnonce_indices": [2, 0, 1],
+ "aggnonce": "022265D41FDEFDC64072C7E168B345D547208C6E02E4D76E2F0D91C0773CF9FC250230B496E7FC1C45EFFB0687CFFC556FDA507F69CAE9894022828903DC3198DAFE",
+ "tweak_indices": [],
+ "is_xonly": [],
+ "psig_indices": [2, 0, 1],
+ "expected": "8471BE6E49D0E43097DD32DA374039149F5D00165A8AD369AE86E362D13730DA14A93293A0FFF4F9FDD438415DA4FDB4B008B2EB730110600208D3E1EC0945AC",
+ "comment": "Order of the singer set shouldn't affect the aggregate signature. The expected value must match the previous test vector. "
+ },
+ {
+ "id_indices": [1, 3, 4],
+ "pubshare_indices": [1, 3, 4],
+ "pubnonce_indices": [1, 3, 4],
+ "aggnonce": "0248A0DA464AB5C69B7C0159A1D3773478D606AE3BFE38AC26556B3B4E5FA47668023848EEDE8406CDE99E2CA52D2135D9AC31BC5636DE8452C597A77611CBA9AFCC",
+ "tweak_indices": [0],
+ "is_xonly": [false],
+ "psig_indices": [3, 4, 5],
+ "expected": "BA3388FF06D512B23065196A8F8673EA2A6DBAE6714A3E634C258E176A009172462472BB88A65538F17BF5DC433EC01C1770CA5F233A2718662EA1019FDC3BB8",
+ "comment": "Signing with tweaked group public key"
+ },
+ {
+ "id_indices": [0, 1, 2, 3],
+ "pubshare_indices": [0, 1, 2, 3],
+ "pubnonce_indices": [0, 1, 2, 3],
+ "aggnonce": "03EE8C3A0DCB63B05B93370561E80BDA65BDB7412BD947F8CED8CE0B5574D87FC002D5E954284D0198FC64FFD0ABB50DF8B0C3A6B2B369A5DB3E318A058482B29BA1",
+ "tweak_indices": [0, 1, 2],
+ "is_xonly": [true, false, true],
+ "psig_indices": [6, 7, 8, 9],
+ "expected": "143C2B3A3F4847D0D9FA3D3D7EEF6135148345C0BB620707334ADF5F1395A17BB02DA3FEDB9108179331D06E0BBD34B19E3FFF0893616A2310D47F73077C5CD5",
+ "comment": ""
+ },
+ {
+ "id_indices": [0, 1, 2, 3, 4],
+ "pubshare_indices": [0, 1, 2, 3, 4],
+ "pubnonce_indices": [0, 1, 2, 3, 4],
+ "aggnonce": "03C03DE1E69FABAFE2BC9FF8940CD50BCCA1B5A35CB56A719264F8C93DA006837C03F59B87EEF390D4189504FFDE2EE709372E036DE71E0633A6B1D30D3A10EC6FFE",
+ "tweak_indices": [0, 1, 2],
+ "is_xonly": [true, false, true],
+ "psig_indices": [10, 11, 12, 13, 14],
+ "expected": "64F75B69667302B459330DE1221AEF5C8F04C44635E6078ED068344EF04FBA00273493772CABC9C2C87515F916118CCAB2D3902A6F5EAC6F155725B58DFCBBD3",
+ "comment": "Signing with max number of participants and tweaked group public key"
+ }
+ ],
+ "error_test_cases": [
+ {
+ "id_indices": [0, 1, 2, 3, 4],
+ "pubshare_indices": [0, 1, 2, 3, 4],
+ "pubnonce_indices": [0, 1, 2, 3, 4],
+ "aggnonce": "03C03DE1E69FABAFE2BC9FF8940CD50BCCA1B5A35CB56A719264F8C93DA006837C03F59B87EEF390D4189504FFDE2EE709372E036DE71E0633A6B1D30D3A10EC6FFE",
+ "tweak_indices": [0, 1, 2],
+ "is_xonly": [true, false, true],
+ "psig_indices": [10, 11, 15, 13, 14],
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": 3,
+ "contrib": "psig"
+ },
+ "comment": "Partial signature is invalid because it exceeds group size"
+ },
+ {
+ "id_indices": [0, 1, 2, 3, 4],
+ "pubshare_indices": [0, 1, 2, 3, 4],
+ "pubnonce_indices": [0, 1, 2, 3, 4],
+ "aggnonce": "03C03DE1E69FABAFE2BC9FF8940CD50BCCA1B5A35CB56A719264F8C93DA006837C03F59B87EEF390D4189504FFDE2EE709372E036DE71E0633A6B1D30D3A10EC6FFE",
+ "tweak_indices": [0, 1, 2],
+ "is_xonly": [true, false, true],
+ "psig_indices": [10, 11, 12, 13],
+ "error": {
+ "type": "value",
+ "message": "The psigs and ids arrays must have the same length."
+ },
+ "comment": "Partial signature count doesn't match the signer set count"
+ }
+ ]
+}
diff --git a/test/jmfrost/vectors/sign_verify_vectors.json b/test/jmfrost/vectors/sign_verify_vectors.json
new file mode 100644
index 0000000..94a5e11
--- /dev/null
+++ b/test/jmfrost/vectors/sign_verify_vectors.json
@@ -0,0 +1,339 @@
+{
+ "max_participants": 5,
+ "min_participants": 3,
+ "group_public_key": "037940B3ED1FDC360252A6F48058C7B94276DFB6AA2B7D51706FB48326B19E7AE1",
+ "secshare_p1":"81D0D40CDF044588167A987C14552954DB187AC5AD3B1CA40D7B03DCA32AFDFB",
+ "identifiers": [1, 2, 3, 4, 5],
+ "pubshares": [
+ "02BB66437FCAA01292BFB4BB6F19D67818FE693215C36C4663857F1DC8AB8BF4FA",
+ "02C3250013C86AA9C3011CD40B2658CBC5B950DD21FFAA4EDE1BB66E18A063CED5",
+ "03259D7068335012C08C5D80E181969ED7FFA08F7973E3ED9C8C0BFF3EC03C223E",
+ "02A22971750242F6DA35B8DB0DFE74F38A3227118B296ADD2C65E324E2B7EB20AD",
+ "03541293535BB662F8294C4BEB7EA25F55FEAE86C6BAE0CEBD741EAAA28639A6E6",
+ "020000000000000000000000000000000000000000000000000000000000000007"
+ ],
+ "secnonces_p1": [
+ "96DF27F46CB6E0399C7A02811F6A4D695BBD7174115477679E956658FF2E83D618E4F670DF3DEB215934E4F68D4EEC71055B87288947D75F6E1EA9037FF62173",
+ "00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
+ ],
+ "pubnonces": [
+ "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "02D26EF7E09A4BC0A2CF295720C64BAD56A28EF50B6BECBD59AF6F3ADE6C2480C503D11B9993AE4C2D38EA2591287F7B744976F0F0B79104B96D6399507FC533E893",
+ "03C7E3D6456228347B658911BF612967F36C7791C24F9607ADB34E09F8CC1126D803D2C9C6E3D1A11463F8C2D57B145A814F5D44FD1A42F7A024140AC30D48EE0BEE",
+ "036409E6BA4A00E148E9BE2D3B4217A74B3A65F0D75489176EF8A7D2BD699B949002B1E9FA2A8AE80CD7CE1593B51402B980B56896DB5B5C2B07EDA2C0CFEB08AD93",
+ "02464144C7AFAEF651F63E330B1FFF6EEC43991F9AE75AE6069796C097B04DAE720288B464788E5DFC9C2CCD6A3CCBBED643666749250012DA220D1C9FC559214270",
+ "0200000000000000000000000000000000000000000000000000000000000000090287BF891D2A6DEAEBADC909352AA9405D1428C15F4B75F04DAE642A95C2548480",
+ "03135EFD879EC3BC76E953758E0611E07057CA4F1EA935E8BA6151ED4696B7827A0397A1B70CF6403286EE0DD153DBFDCFBEE3D7A15745569C097A328C7CCB36E7E5"
+ ],
+ "aggnonces": [
+ "02047C99228CEA528AE200A82CBE4CD188BC67D58F537D1904A16B07FCDE07C3A6038708199DFA5BC5C41A0DD0FBD7D0620ADB4AC9991F7DB55A155CE9396AA80D1A",
+ "0365B60FA963FCB2ED1454264942397DBFC244A4B6CBE8FDEAF6FB23F14F76610002433AB9A295A67CD2B45B001B6F8154DC6619C994776EBF65A3C88A4BC94DBC98",
+ "03AB37C47419536990037B903428008878E4F395823A135C2B39E67FA850CFF41F028967ECFE399759125F59F7142B6580D91F70DE1C9E9C6B0F56754B64370A4438",
+ "0353365AF75F7C246089940D57D3265947A1D27576E411AE9C98702516C72DB51B02F5483E63F474BDD8EAC03F99276ED5A2ED31786F5B0F1A8706BE7367BC1D4555",
+ "000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
+ "048465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD61037496A3CC86926D452CAFCFD55D25972CA1675D549310DE296BFF42F72EEEA8C9",
+ "028465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD61020000000000000000000000000000000000000000000000000000000000000009",
+ "028465FCF0BBDBCF443AABCCE533D42B4B5A10966AC09A49655E8C42DAAB8FCD6102FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFC30"
+ ],
+ "msgs": [
+ "F95466D086770E689964664219266FE5ED215C92AE20BAB5C9D79ADDDDF3C0CF",
+ "",
+ "2626262626262626262626262626262626262626262626262626262626262626262626262626"
+ ],
+ "valid_test_cases": [
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "pubnonce_indices": [0, 1, 2],
+ "aggnonce_index": 0,
+ "msg_index": 0,
+ "signer_index": 0,
+ "expected": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
+ "comment": "Signing with minimum number of participants"
+ },
+ {
+ "id_indices": [1, 0, 2],
+ "pubshare_indices": [1, 0, 2],
+ "pubnonce_indices": [1, 0, 2],
+ "aggnonce_index": 0,
+ "msg_index": 0,
+ "signer_index": 1,
+ "expected": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
+ "comment": "Partial-signature shouldn't change if the order of signers set changes (without changing secnonces)"
+ },
+ {
+ "id_indices": [2, 1, 0],
+ "pubshare_indices": [2, 1, 0],
+ "pubnonce_indices": [2, 1, 0],
+ "aggnonce_index": 0,
+ "msg_index": 0,
+ "signer_index": 2,
+ "expected": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
+ "comment": "Partial-signature shouldn't change if the order of signers set changes (without changing secnonces)"
+ },
+ {
+ "id_indices": [0, 3, 4],
+ "pubshare_indices": [0, 3, 4],
+ "pubnonce_indices": [0, 3, 4],
+ "aggnonce_index": 1,
+ "msg_index": 0,
+ "signer_index": 0,
+ "expected": "599723B8E16DA7D67A43EB09E6A990BF5BA7CD441657FE14D654E8C0523D0814",
+ "comment": "Partial-signature changes if the members of signers set changes"
+ },
+ {
+ "id_indices": [0, 1, 2, 3],
+ "pubshare_indices": [0, 1, 2, 3],
+ "pubnonce_indices": [0, 1, 2, 3],
+ "aggnonce_index": 2,
+ "msg_index": 0,
+ "signer_index": 0,
+ "expected": "6762C37ABF433C029E6698B435D5F7BE634D7B64A8151ACB07402465DB7D4057"
+ },
+ {
+ "id_indices": [0, 1, 2, 3, 4],
+ "pubshare_indices": [0, 1, 2, 3, 4],
+ "pubnonce_indices": [0, 1, 2, 3, 4],
+ "aggnonce_index": 3,
+ "msg_index": 0,
+ "signer_index": 0,
+ "expected": "32D17330BF21D4D058E52A07E86F21D653ED697C1CCFE6F4D17084EF5C99FF18",
+ "comment": "Signing with maximum number of participants"
+ },
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "pubnonce_indices": [0, 1, 6],
+ "aggnonce_index": 4,
+ "msg_index": 0,
+ "signer_index": 0,
+ "expected": "16B1E11E2BB93911E0422715FD03C0C8F1B7845B6A69F8BB8AB90155D91C25B5",
+ "comment": "Both halves of aggregate nonce correspond to point at infinity"
+ },
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "pubnonce_indices": [0, 1, 2],
+ "aggnonce_index": 0,
+ "msg_index": 1,
+ "signer_index": 0,
+ "expected": "E1915436DC7D4BC162842A3C1BAA16E82A8D64056A02C5D2BD75784B604C23CD",
+ "comment": "Empty message"
+ },
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "pubnonce_indices": [0, 1, 2],
+ "aggnonce_index": 0,
+ "msg_index": 2,
+ "signer_index": 0,
+ "expected": "78E04F68D813CBAF68CB5E19A835C69B833138ED18BDE63CB399F52F559EAA17",
+ "comment": "Message longer than 32 bytes (38-byte msg)"
+ }
+ ],
+ "sign_error_test_cases": [
+ {
+ "id_indices": [3, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "aggnonce_index": 0,
+ "msg_index": 0,
+ "signer_index": null,
+ "signer_id": 1,
+ "secnonce_index": 0,
+ "error": {
+ "type": "value",
+ "message": "The signer's id must be present in the participant identifier list."
+ },
+ "comment": "The signer's id is not in the participant identifier list."
+ },
+ {
+ "id_indices": [0, 1, 2, 1],
+ "pubshare_indices": [0, 1, 2, 1],
+ "aggnonce_index": 0,
+ "msg_index": 0,
+ "signer_index": 0,
+ "secnonce_index": 0,
+ "error": {
+ "type": "value",
+ "message": "The participant identifier list must contain unique elements."
+ },
+ "comment": "The participant identifier list contains duplicate elements."
+ },
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [3, 1, 2],
+ "aggnonce_index": 0,
+ "msg_index": 0,
+ "signer_index": 0,
+ "secnonce_index": 0,
+ "error": {
+ "type": "value",
+ "message": "The signer's pubshare must be included in the list of pubshares."
+ },
+ "comment": "The signer's pubshare is not in the list of pubshares. This test case is optional: it can be skipped by implementations that do not check that the signer's pubshare is included in the list of pubshares."
+ },
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1],
+ "aggnonce_index": 0,
+ "msg_index": 0,
+ "signer_index": 0,
+ "secnonce_index": 0,
+ "error": {
+ "type": "value",
+ "message": "The pubshares and ids arrays must have the same length."
+ },
+ "comment": "The participant identifiers count exceed the participant public shares count"
+ },
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 5],
+ "aggnonce_index": 0,
+ "msg_index": 0,
+ "signer_index": 0,
+ "secnonce_index": 0,
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": 3,
+ "contrib": "pubshare"
+ },
+ "comment": "Signer 3 provided an invalid participant public share"
+ },
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "aggnonce_index": 5,
+ "msg_index": 0,
+ "signer_index": 0,
+ "secnonce_index": 0,
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": null,
+ "contrib": "aggnonce"
+ },
+ "comment": "Aggregate nonce is invalid due wrong tag, 0x04, in the first half"
+ },
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "aggnonce_index": 6,
+ "msg_index": 0,
+ "signer_index": 0,
+ "secnonce_index": 0,
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": null,
+ "contrib": "aggnonce"
+ },
+ "comment": "Aggregate nonce is invalid because the second half does not correspond to an X coordinate"
+ },
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "aggnonce_index": 7,
+ "msg_index": 0,
+ "signer_index": 0,
+ "secnonce_index": 0,
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": null,
+ "contrib": "aggnonce"
+ },
+ "comment": "Aggregate nonce is invalid because second half exceeds field size"
+ },
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "aggnonce_index": 0,
+ "msg_index": 0,
+ "signer_index": 0,
+ "secnonce_index": 1,
+ "error": {
+ "type": "value",
+ "message": "first secnonce value is out of range."
+ },
+ "comment": "Secnonce is invalid which may indicate nonce reuse"
+ }
+ ],
+ "verify_fail_test_cases": [
+ {
+ "psig": "21255BB1924800E4BF273455BB20C07239F15FFA4526F218CC8386E10A981655",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "pubnonce_indices": [0, 1, 2],
+ "msg_index": 0,
+ "signer_index": 0,
+ "comment": "Wrong signature (which is equal to the negation of valid signature)"
+ },
+ {
+ "psig": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "pubnonce_indices": [0, 1, 2],
+ "msg_index": 0,
+ "signer_index": 1,
+ "comment": "Wrong signer"
+ },
+ {
+ "psig": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [3, 1, 2],
+ "pubnonce_indices": [0, 1, 2],
+ "msg_index": 0,
+ "signer_index": 0,
+ "comment": "The signer's pubshare is not in the list of pubshares"
+ },
+ {
+ "psig": "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "pubnonce_indices": [0, 1, 2],
+ "msg_index": 0,
+ "signer_index": 0,
+ "comment": "Signature exceeds group size"
+ }
+ ],
+ "verify_error_test_cases": [
+ {
+ "psig": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "pubnonce_indices": [5, 1, 2],
+ "msg_index": 0,
+ "signer_index": 0,
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": 1,
+ "contrib": "pubnonce"
+ },
+ "comment": "Invalid pubnonce"
+ },
+ {
+ "psig": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [5, 1, 2],
+ "pubnonce_indices": [0, 1, 2],
+ "msg_index": 0,
+ "signer_index": 0,
+ "error": {
+ "type": "invalid_contribution",
+ "signer_id": 1,
+ "contrib": "pubshare"
+ },
+ "comment": "Invalid pubshare"
+ },
+ {
+ "psig": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "pubnonce_indices": [0, 1, 2, 3],
+ "msg_index": 0,
+ "signer_index": 0,
+ "error": {
+ "type": "value",
+ "message": "The ids, pubnonces and pubshares arrays must have the same length."
+ },
+ "comment": "public nonces count is greater than ids and pubshares"
+ }
+ ]
+}
diff --git a/test/jmfrost/vectors/tweak_vectors.json b/test/jmfrost/vectors/tweak_vectors.json
new file mode 100644
index 0000000..13564e1
--- /dev/null
+++ b/test/jmfrost/vectors/tweak_vectors.json
@@ -0,0 +1,164 @@
+{
+ "max_participants": 5,
+ "min_participants": 3,
+ "group_public_key": "037940B3ED1FDC360252A6F48058C7B94276DFB6AA2B7D51706FB48326B19E7AE1",
+ "secshare_p1":"81D0D40CDF044588167A987C14552954DB187AC5AD3B1CA40D7B03DCA32AFDFB",
+ "identifiers": [1, 2, 3, 4, 5],
+ "pubshares": [
+ "02BB66437FCAA01292BFB4BB6F19D67818FE693215C36C4663857F1DC8AB8BF4FA",
+ "02C3250013C86AA9C3011CD40B2658CBC5B950DD21FFAA4EDE1BB66E18A063CED5",
+ "03259D7068335012C08C5D80E181969ED7FFA08F7973E3ED9C8C0BFF3EC03C223E",
+ "02A22971750242F6DA35B8DB0DFE74F38A3227118B296ADD2C65E324E2B7EB20AD",
+ "03541293535BB662F8294C4BEB7EA25F55FEAE86C6BAE0CEBD741EAAA28639A6E6"
+ ],
+ "secnonce_p1":"96DF27F46CB6E0399C7A02811F6A4D695BBD7174115477679E956658FF2E83D618E4F670DF3DEB215934E4F68D4EEC71055B87288947D75F6E1EA9037FF62173",
+ "pubnonces": [
+ "02FCDBEE416E4426FB4004BAB2B416164845DEC27337AD2B96184236D715965AB2039F71F389F6808DC6176F062F80531E13EA5BC2612B690FC284AE66C2CD859CE9",
+ "02D26EF7E09A4BC0A2CF295720C64BAD56A28EF50B6BECBD59AF6F3ADE6C2480C503D11B9993AE4C2D38EA2591287F7B744976F0F0B79104B96D6399507FC533E893",
+ "03C7E3D6456228347B658911BF612967F36C7791C24F9607ADB34E09F8CC1126D803D2C9C6E3D1A11463F8C2D57B145A814F5D44FD1A42F7A024140AC30D48EE0BEE",
+ "036409E6BA4A00E148E9BE2D3B4217A74B3A65F0D75489176EF8A7D2BD699B949002B1E9FA2A8AE80CD7CE1593B51402B980B56896DB5B5C2B07EDA2C0CFEB08AD93",
+ "02464144C7AFAEF651F63E330B1FFF6EEC43991F9AE75AE6069796C097B04DAE720288B464788E5DFC9C2CCD6A3CCBBED643666749250012DA220D1C9FC559214270"
+ ],
+ "aggnonces": [
+ "02047C99228CEA528AE200A82CBE4CD188BC67D58F537D1904A16B07FCDE07C3A6038708199DFA5BC5C41A0DD0FBD7D0620ADB4AC9991F7DB55A155CE9396AA80D1A",
+ "03AB37C47419536990037B903428008878E4F395823A135C2B39E67FA850CFF41F028967ECFE399759125F59F7142B6580D91F70DE1C9E9C6B0F56754B64370A4438",
+ "0353365AF75F7C246089940D57D3265947A1D27576E411AE9C98702516C72DB51B02F5483E63F474BDD8EAC03F99276ED5A2ED31786F5B0F1A8706BE7367BC1D4555"
+ ],
+ "tweaks": [
+ "E8F791FF9225A2AF0102AFFF4A9A723D9612A682A25EBE79802B263CDFCD83BB",
+ "AE2EA797CC0FE72AC5B97B97F3C6957D7E4199A167A58EB08BCAFFDA70AC0455",
+ "F52ECBC565B3D8BEA2DFD5B75A4F457E54369809322E4120831626F290FA87E0",
+ "1969AD73CC177FA0B4FCED6DF1F7BF9907E665FDE9BA196A74FED0A3CF5AEF9D",
+ "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141"
+ ],
+ "msg": "F95466D086770E689964664219266FE5ED215C92AE20BAB5C9D79ADDDDF3C0CF",
+ "valid_test_cases": [
+ {
+ "id_indices": [1, 2, 0],
+ "pubshare_indices": [1, 2, 0],
+ "pubnonce_indices": [1, 2, 0],
+ "tweak_indices": [],
+ "is_xonly": [],
+ "aggnonce_index": 0,
+ "signer_index": 2,
+ "expected": "DEDAA44E6DB7FF1B40D8CBAA44DF3F8C80BD7CEC6A21AE22F34ED7ABC59E2AEC",
+ "comment": "No tweak. The expected value (partial sig) must match the signing with untweaked group public key."
+ },
+ {
+ "id_indices": [1, 2, 0],
+ "pubshare_indices": [1, 2, 0],
+ "pubnonce_indices": [1, 2, 0],
+ "tweak_indices": [0],
+ "is_xonly": [true],
+ "aggnonce_index": 0,
+ "signer_index": 2,
+ "expected": "00A84851A7D3F53B94FDFDE0BE6C6DCE570B7FF27E8B77FDF75AFF52066F42EE",
+ "comment": "A single x-only tweak"
+ },
+ {
+ "id_indices": [1, 2, 0],
+ "pubshare_indices": [1, 2, 0],
+ "pubnonce_indices": [1, 2, 0],
+ "tweak_indices": [0],
+ "is_xonly": [false],
+ "aggnonce_index": 0,
+ "signer_index": 2,
+ "expected": "FC2D7852AAEF8F3C229FEC7E6B496999C52857387E4274CD2F7625CD4B262D73",
+ "comment": "A single plain tweak"
+ },
+ {
+ "id_indices": [1, 2, 0],
+ "pubshare_indices": [1, 2, 0],
+ "pubnonce_indices": [1, 2, 0],
+ "tweak_indices": [0, 1],
+ "aggnonce_index": 0,
+ "is_xonly": [false, true],
+ "signer_index": 2,
+ "expected": "1634928A5951F23E77DB9D6171E89A04E55B2BC07A492CFE68B611303C96957A",
+ "comment": "A plain tweak followed by an x-only tweak"
+ },
+ {
+ "id_indices": [1, 2, 0],
+ "pubshare_indices": [1, 2, 0],
+ "pubnonce_indices": [1, 2, 0],
+ "tweak_indices": [0, 1, 2, 3],
+ "aggnonce_index": 0,
+ "is_xonly": [true, false, true, false],
+ "signer_index": 2,
+ "expected": "4252C4EA9641F1B8C502F3B63C3D0AFEF3274CFE7C70D94AE2F2DC54FA16D216",
+ "comment": "Four tweaks: x-only, plain, x-only, plain. If an implementation prohibits applying plain tweaks after x-only tweaks, it can skip this test vector or return an error."
+ },
+ {
+ "id_indices": [1, 2, 0],
+ "pubshare_indices": [1, 2, 0],
+ "pubnonce_indices": [1, 2, 0],
+ "tweak_indices": [0, 1, 2, 3],
+ "aggnonce_index": 0,
+ "is_xonly": [false, false, true, true],
+ "signer_index": 2,
+ "expected": "CF079FD835F00CF6A737FDC19D602AA445C95825B6A5D1C0FFB32A848427F49E",
+ "comment": "Four tweaks: plain, plain, x-only, x-only."
+ },
+ {
+ "id_indices": [0, 1, 2],
+ "pubshare_indices": [0, 1, 2],
+ "pubnonce_indices": [0, 1, 2],
+ "tweak_indices": [0, 1, 2, 3],
+ "aggnonce_index": 0,
+ "is_xonly": [false, false, true, true],
+ "signer_index": 0,
+ "expected": "CF079FD835F00CF6A737FDC19D602AA445C95825B6A5D1C0FFB32A848427F49E",
+ "comment": "Order of the signers shouldn't affect tweaking. The expected value (partial sig) must match the previous test vector."
+ },
+ {
+ "id_indices": [0, 1, 2, 3],
+ "pubshare_indices": [0, 1, 2, 3],
+ "pubnonce_indices": [0, 1, 2, 3],
+ "tweak_indices": [0, 1, 2, 3],
+ "aggnonce_index": 1,
+ "is_xonly": [false, false, true, true],
+ "signer_index": 0,
+ "expected": "22B8AE565FB2A52E07F1D6D0B5F85DD16932ADF77C0D61C473554133C22EE617",
+ "comment": "Number of the signers won't affect tweaking but the expected value (partial sig) will change because of interpolating value."
+ },
+ {
+ "id_indices": [0, 1, 2, 3, 4],
+ "pubshare_indices": [0, 1, 2, 3, 4],
+ "pubnonce_indices": [0, 1, 2, 3, 4],
+ "tweak_indices": [0, 1, 2, 3],
+ "aggnonce_index": 2,
+ "is_xonly": [false, false, true, true],
+ "signer_index": 0,
+ "expected": "7BCA92625F1C83D1EE6A855A198D25410BBE3867E2B61400A02D12BA2D6E2384",
+ "comment": "Tweaking with maximum possible signers"
+ }
+ ],
+ "error_test_cases": [
+ {
+ "id_indices": [1, 2, 0],
+ "pubshare_indices": [1, 2, 0],
+ "tweak_indices": [4],
+ "aggnonce_index": 0,
+ "is_xonly": [false],
+ "signer_index": 2,
+ "error": {
+ "type": "value",
+ "message": "The tweak must be less than n."
+ },
+ "comment": "Tweak is invalid because it exceeds group size"
+ },
+ {
+ "id_indices": [1, 2, 0],
+ "pubshare_indices": [1, 2, 0],
+ "tweak_indices": [0, 1, 2, 3],
+ "aggnonce_index": 0,
+ "is_xonly": [false, false],
+ "signer_index": 2,
+ "error": {
+ "type": "value",
+ "message": "The tweaks and is_xonly arrays must have the same length."
+ },
+ "comment": "Tweaks count doesn't match the tweak modes count"
+ }
+ ]
+}