millix

job engine

overview

each millix client has a finite amount of resources to manage and spend, and the following fundamental set of priorities:

  1. health: responsiveness, availability, disk, network, cpu and memory consumption

  2. data coverage and currency: storing the desired amount of the available data and keeping it current

  3. speed and efficiency: utilize the available resources as efficiently as possible to perform as many operations as possible

these three tactical priorities represent how node operators achieve their strategic priority: maximum earnings. the health of a node should be independent of other nodes: the behavior of node A shouldn't be allowed to affect the health of node B.

the protocol codebase should contain only what is necessary to complete broad functions, but should make no decisions about which functions to perform or when to perform them. no sql statements should exist inline in the node code, except for what is required to access the job engine. to the maximum extent possible, every job should be represented by an API call that the node accepts and executes.

all jobs that a node can perform should be listed in a job catalog that can be viewed and edited through a user interface, and that the node operator or the node itself can edit to reprioritize or reschedule its own jobs, or to spawn new jobs based on its load. in this way, a powerful server may run a different catalog and configuration of jobs than an inexpensive laptop. by separating the node functionality from the job catalog, nodes can be configured to run niche operations without changing the code. for example, a node configured to send bulk transactions may have different job configurations than a node configured to store data and not transact. if the jobs are properly separated from the node, the job catalog on one node could coordinate the jobs performed on another node that it can authenticate to.
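as a sketch only, a catalog entry and the kind of api call used to edit it might look like the following. the field names mirror the job table defined below; the typescript types, the route and the function names are illustrative assumptions, not part of this specification.

    // illustrative sketch: a job catalog entry as the node api might expose it.
    // field names follow the job table in this spec; everything else is assumed.
    interface JobCatalogEntry {
        job_id: number;
        job_name: string;
        processor_id: number;
        job_group_id: number;     // e.g. data pruning, audit point
        job_type_id: number;      // function | sql | url | rpc call
        job_payload: string;      // json naming the function/url/rpc and its parameters
        run_always: 0 | 1;
        run_every: number | null; // minutes
        priority: number;
        status: number;           // 1 = active in the catalog
    }

    // the operator (or the node itself, reacting to its own load) could
    // reprioritize or reschedule a job with a single api call. the base url
    // and route are hypothetical.
    async function rescheduleJob(job_id: number, priority: number, run_every: number): Promise<void> {
        await fetch(`http://localhost:1234/api/job/${job_id}`, {
            method : 'POST',
            headers: {'content-type': 'application/json'},
            body   : JSON.stringify({priority, run_every})
        });
    }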

tables

object

object_id
object_name
object_name_field
object_key
object_type
id_length
search_prefix
time_to_live
allow_prune
allow_list
conn_string
status
create_date

job_type - lookup

job_type_id
job_type (function, sql, url, rpc call)
status
create_date

job_group - lookup

job_group_id
job_group_name (housekeeping, fee validation, transaction validation, audit point, data pruning, reporting, peer connections, data sync)
status
create_date

job_processor - represents a thread or instantiation

processor_id
ip_address
port
rpc_user
rpc_password
status
create_date

job - list and definition of each job, with the instructions to execute it

job_id
job_name
processor_id
job_group_id
job_type_id
job_payload (json indicating the function/url/rpc name and parameters provided to execute)
timeout (in milliseconds. 0=no timeout)
run_always (0|1=run as often as possible when resources are available)
run_every (minutes)
run_on_the (number of minutes past each hour)
run_at (specific time)
run_date (day of the month)
in_progress
last_date_begin
last_date_end
last_elapse
last_response
priority
status
create_date
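for reference, the job table above could be declared roughly as follows, assuming a sqlite backing store. only the column names come from this specification; the types, defaults and foreign keys are assumptions.

    // sketch of a possible ddl for the job table; column names follow the spec,
    // the sqlite types, defaults and foreign keys are assumptions.
    const CREATE_TABLE_JOB: string = `
        CREATE TABLE IF NOT EXISTS job (
            job_id          INTEGER PRIMARY KEY,
            job_name        TEXT    NOT NULL,
            processor_id    INTEGER NOT NULL REFERENCES job_processor (processor_id),
            job_group_id    INTEGER NOT NULL REFERENCES job_group (job_group_id),
            job_type_id     INTEGER NOT NULL REFERENCES job_type (job_type_id),
            job_payload     TEXT,                 -- json: function/url/rpc name and parameters
            timeout         INTEGER DEFAULT 0,    -- milliseconds, 0 = no timeout
            run_always      INTEGER DEFAULT 0,    -- 0|1
            run_every       INTEGER,              -- minutes
            run_on_the      INTEGER,              -- minutes past each hour
            run_at          TEXT,                 -- specific time
            run_date        INTEGER,              -- day of the month
            in_progress     INTEGER DEFAULT 0,
            last_date_begin INTEGER,              -- millisecond timestamps
            last_date_end   INTEGER,
            last_elapse     INTEGER,
            last_response   TEXT,
            priority        INTEGER DEFAULT 0,
            status          INTEGER DEFAULT 1,
            create_date     INTEGER DEFAULT (strftime('%s', 'now'))
        );`;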

example job names

job_state_reset (resets in_progress value for jobs that have not completed in X time)
clock_get_ntp_time
clock_validate_peer_time
backup_wallet
db_maintenance_size (vacuum)
stat_device_cpu_available
stat_device_disk_available
stat_device_network_available
stat_device_memory_available
peer_list
peer_connect
peer_disconnect
audit_point_generate
transaction_prune
transaction_input_prune
transaction_output_prune
transaction_unspent_consolidate
fee_stat_transaction_average
fee_stat_storage_average
object_stat_count
alert_

using this model, the node installation package would include a job.conf file that contains its initial set of job instructions.

the job.conf file would assume the lowest supported hardware configuration and would create the maximum number of job_processor records in the job_processor table that the hardware could support. it would then create a full catalog of job records, activate the minimum required set in order of priority, and only add or activate additional job records as the system determined it would be healthy to do so. an advanced user could deploy their own custom job.conf file depending on their hardware and tactical objectives.
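a hypothetical fragment of such a job.conf, shown as the object a node would parse from it. the job and group names come from the example lists above; the structure, values and activation flags are illustrative assumptions.

    // hypothetical job.conf fragment (parsed form); structure and values are
    // illustrative, only the job and group names come from this spec.
    const JOB_CONF = {
        job_processor: [
            {processor_id: 1, ip_address: '127.0.0.1', port: 8888, rpc_user: 'node', rpc_password: '<generated>'}
        ],
        job: [
            {job_name: 'stat_device_memory_available', job_group: 'housekeeping',
             job_type: 'function', job_payload: {name: 'getMemoryAvailable'},
             run_every: 1, priority: 1, status: 1},
            {job_name: 'peer_connect', job_group: 'peer connections',
             job_type: 'function', job_payload: {name: 'connectToPeers'},
             run_always: 1, priority: 2, status: 1},
            // present in the full catalog but not activated until the node is healthy enough
            {job_name: 'transaction_prune', job_group: 'data pruning',
             job_type: 'function', job_payload: {name: 'pruneTransactions'},
             run_every: 10, priority: 3, status: 0}
        ]
    };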

the node would query the job_processor table to determine how many threads are authorized. each thread (processor_id) would query the job table for a list of active jobs that are assigned to that processor_id, available to be worked on based on their run_* schedule assignment, not currently in progress, and ordered by priority relative to the other jobs assigned to that processor_id.
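one possible shape of that per-processor query, assuming sqlite and the schema sketched earlier; the simplified schedule predicate (run_always and run_every only) and the direction of the priority ordering are assumptions.

    // sketch of the per-processor job selection; sqlite syntax, the simplified
    // run_* predicate and "higher priority value first" are assumptions.
    const SELECT_NEXT_JOBS: string = `
        SELECT *
          FROM job
         WHERE processor_id = ?
           AND status       = 1
           AND in_progress  = 0
           AND (run_always = 1
                OR last_date_end IS NULL
                OR (run_every IS NOT NULL
                    AND (strftime('%s', 'now') * 1000) - last_date_end >= run_every * 60 * 1000))
         ORDER BY priority DESC;`;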

processor_ids could be organized such that only one processor is assigned jobs that write to a specific table (object_id), which, in conjunction with the serialized execution imposed by priority ordering, would prevent db locks.

picking up a job to be worked on would set last_date_begin to a millisecond-precise timestamp and set job.in_progress = 1 to prevent unintended parallel work. the processor executes the job as instructed in the job_payload and, when completed, updates job.in_progress = 0, job.last_date_end and job.last_elapse. the processor then moves on to the next job and repeats.
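a minimal sketch of that claim / execute / release cycle, assuming a synchronous prepare/run database interface (for example better-sqlite3) and a hypothetical executeJob dispatcher; none of the function names beyond the column names are defined by this specification.

    // minimal sketch of the claim/execute/release cycle; the db interface and
    // the executeJob dispatcher are assumptions, the columns come from the spec.
    interface JobDb {
        prepare(sql: string): { run(...params: unknown[]): void };
    }

    // hypothetical dispatcher that runs the function/sql/url/rpc named in the payload
    declare function executeJob(payload: unknown): Promise<unknown>;

    async function runJob(db: JobDb, job: {job_id: number, job_payload: string}): Promise<void> {
        const begin = Date.now(); // millisecond-precise timestamp
        db.prepare('UPDATE job SET in_progress = 1, last_date_begin = ? WHERE job_id = ?')
          .run(begin, job.job_id);

        let response: unknown = null;
        try {
            response = await executeJob(JSON.parse(job.job_payload));
        }
        finally {
            const end = Date.now();
            db.prepare(`UPDATE job
                           SET in_progress = 0, last_date_end = ?, last_elapse = ?, last_response = ?
                         WHERE job_id = ?`)
              .run(end, end - begin, JSON.stringify(response), job.job_id);
        }
    }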

with a processor assigned to an object_id, and no other processor assigned to write to that object_id, a node could prioritize audit points and pruning over data syncing and receiving new data from peers. in this case no new data is written until the node is healthy enough to store and process additional data.

if the device has available resources and a processor is fully utilized, an additional processor can be created with additional jobs assigned to it.

additional tables

by storing historical data, the node operator and the job engine are better informed when altering the node's configuration to achieve its tactical and strategic objectives.

stat

separate spec

stat_type - lookup

separate spec

example stat type

object.record_count
object.file_size
object.count_insert
object.date_recent_insert
object.count_prune
object.date_recent_prune
job_processor.stat_elapsed_time_sum_run_always_minute
job_processor.stat_elapsed_time_run_every_sum
job_processor.stat_elapsed_time_run_on_the_sum
job_processor.stat_elapsed_time_run_date_sum

whitepaper

overview

millix is an open source cryptocurrency project. millix is fully decentralized, designed for simplicity, and transacts at very high speed and very large scale.

a diverse group of developers and business professionals began work on the project in the spring of 2018. their motivation to create a cryptocurrency protocol with the use case potential of millix came from their backgrounds building:

  • social network platforms

  • content management systems

  • e-commerce systems

  • data distribution systems

  • online financial services

  • communication services

  • affiliate marketing

  • manufacturing and logistics operations

  • gaming platforms

  • accounting and legal practices

fundamentally, there was a recognition that

“all activity benefits from trusted
transactions at scale”

which influenced the following set of first principles:

  • currencies should not be created with debt

  • currencies should operate at infinite scale

  • the cost of securing value can't exceed the value it secures

  • a currency's market value should be proportionate to its fundamental value

  • participants that increase fundamental value should be compensated

  • currencies should function without carrying the weight of previous transactions

  • currencies should work the same throughout the spectrum of transaction values

  • modern currencies should be at least as simple to use as primitive currencies

  • simplicity at the edge is only possible with equal simplicity in the foundation

to the extent there is an inverse correlation between utility and a store of value, millix is not intended to compete with the use case or feature set of blockchain projects. the utility that comes from scale and speed has been prioritized, leading to the principles and methodologies described above.

learn more: meet the millix team

case study: the speed and scale of millix

developers: millix certification for technical and non-technical careers

economy

the total allocation of 9,000,000,000,000,000 mlx (nine quadrillion millix) was created in a genesis event on January 20th, 2020. millix is not being offered directly for sale. instead, it will be distributed to participants who support and improve the millix ecosystem.

participants running millix software constantly receive millix for performing protocol related tasks that improve the millix ecosystem, such as storing transaction data and verifying transactions.

the economy is envisioned to allow any participant to incentivize any computing activity by any other participant via fees.

learn more: how can I earn, buy or sell millix?

use case: millix turns unused computer power into income

technology

unlike blockchain cryptocurrencies, millix is built on the logic of a Directed Acyclic Graph (DAG), giving it transaction speed and capacity that increase as more transactions and users are added.

millix is original work and was not built on a copied code base of existing work. millix has no points of centralization. millix has no designed bottlenecks. millix has no hierarchy of participants or capabilities. millix data is natively sharded.

learn more: how does millix data sharding work

learn more: the millix toolbox of trust

learn more: how do transactions work

learn more: how do transactions get verified

learn more: how does the millix network work

developers: full API documentation and tutorials

roadmap

the initial millix road map consisted of the following themes:

  • base organization and functionality

  • connecting millix protocol to machines for large scale transactions

  • large scale transaction speed and storage

  • a community of earning

  • store and own your personal data

developers: contribute to the millix project

learn more: see the current millix road map with dates

get started

millix is available to download for windows, mac, and linux at millix.org/client.