3 Oct 2024

Be More Human: Our Ideals

TL;DR:

Our Core Ideals:

  • Serve people over systems

  • Ask: “For whom is this making the world a better place?”

  • Our employees are people first, and their lives are more important than their jobs (no grindset!)

  • Make our technology available to as many people as possible. At all times, consider the customer’s ability to pay, not just their willingness to do so.

  • The data that our users generate should belong to them. Our users are not our “product.”


Our Ideals (The Longer Version)

Ideals aren't much good if there's no one to hold you to them. That's why we're publishing ours: openness is vital to accountability.

doFlo is an automation company, and it's reasonable for you to wonder what we're trying to accomplish and what our vision for the future looks like. This post aims to give you a snapshot of our founding ideals and principles: a sense of who we are and what we want. To engage thoughtfully and constructively, we have to make a clear statement about what we believe in.

AI vs. Automation

doFlo is not an AI company. We're an automation company, and although popular discourse today draws little distinction between the two, we wish to mark that distinction clearly and explain it. AI is better described as machine learning (ML), and better still by the specific technologies we hear so much about these days: Large Language Models (LLMs) and Deep Learning Models (DLMs). It is at once a complex and a simple technology. It's complex because it involves "training" models on unfathomably large amounts of data to do things we find useful. Yet it's also quite simple, because at their base, LLMs and DLMs are self-organizing neural networks that loosely imitate what the human brain does, on a far narrower scale and with a far more limited set of data.

LLMs and DLMs are not, as companies like OpenAI like to pretend in an endless parade of publicity campaigns, a "stepping stone" to Artificial General Intelligence (AGI): the fabled self-improving AI that some hope will bring about the "singularity," a point at which AI grows exponentially in capability until it outstrips humans in every way. Instead, they're better understood as probability engines. Feed in some data (and by "some" we mean petabytes of text or image data, all rendered into binary code), and you get a "most likely" output as the result. This is a neat trick: a machine's ability to take in a text prompt and output an image, or the reverse, is quite cool.
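
To make the "probability engine" idea concrete, here is a minimal sketch in Python. It is ours, not any vendor's code, and the tokens and scores are invented; it shows only the core step: scores in, probabilities out, "most likely" token selected.

    # Toy illustration of an LLM as a "probability engine" (not real model code).
    import math

    # Hypothetical scores ("logits") a trained model might assign to next tokens:
    logits = {"dog": 2.1, "cat": 1.3, "carburetor": -3.0}

    # Softmax turns raw scores into probabilities.
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    # The "most likely" output is simply the highest-probability token.
    print(max(probs, key=probs.get))  # -> dog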

AI isn't people

It's important to mark out at this point, however, that LLMs and DLMs do not mimic human thought, the human brain, or the mind. They only appear to do so. They are digital neural networks that imitate the action of biological neurons and their learning mechanisms, but they are not actually neurons, and they don't form thinking organisms. Nor, it's worth mentioning, are they ever likely to.

Human minds are the natural result of a billion years of biological evolution; machine learning creates neural networks that are the artificial result of running a probability analysis on a constrained set of data. By constrained, we mean that the petabytes of information on which ChatGPT is trained are vast, but they are nothing compared to the experience a human mind and brain accumulate as they develop. Nor does an LLM or DLM have the biological mechanisms present in a human brain that help it process information in a useful way.

At its very core, the only thing an LLM or a DLM actually does is move 1s and 0s around in a matrix of 1s and 0s. The neural network doesn't really "know" what those 1s and 0s are, nor could it ever care. 
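
A minimal sketch of what we mean, assuming only NumPy (the numbers are random stand-ins): one "layer" of a neural network is nothing but multiplying and adding numbers, and the machine doing the arithmetic attaches no meaning to any of them.

    # One neural-network "layer" is just arithmetic on numbers (bit patterns).
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(4)         # input: four numbers, meaningless to the machine
    W = rng.random((3, 4))    # "learned" weights: more numbers
    h = np.maximum(W @ x, 0)  # multiply, add, clip: the entire "thought"

    print(h)  # three numbers out; nothing here "knows" what they stand for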

We aren't an AI company

When we say we're an automation company and not an AI company, what we explicitly mean is that we don't concern ourselves with how those 1s and 0s work. We may apply ML to what we do, and enable its use, but we don't create ML. We help our customers design and run automations: processes that deal with real human needs of the daily, hourly, or momentary variety.

We help our customers make real things happen. Those real things involve different digital tools and sources of data that need to be connected in constructive ways, so that humans can get useful work out of them. If that process involves using an LLM or a DLM to make the job easier, we'll use one. However, we don't think that makes us an AI company, any more than a person who uses a search engine is a "search engine programmer," even though, technically, they kind of are.
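
To illustrate, here is a hypothetical sketch in Python of the kind of automation we mean. The services, URLs, and field names are all invented, and this is not doFlo product code; it simply connects a data source to a destination so a human doesn't have to copy anything by hand.

    # Hypothetical three-step automation: storefront orders -> spreadsheet rows.
    import requests

    def run_workflow():
        # Step 1: fetch new orders from a (made-up) storefront API.
        orders = requests.get("https://example-shop.test/api/orders?status=new").json()

        # Step 2: reshape them into the rows a (made-up) spreadsheet tool expects.
        rows = [{"customer": o["name"], "total": o["total"]} for o in orders]

        # Step 3: push the rows to the spreadsheet's webhook.
        requests.post("https://example-sheets.test/webhook/append", json=rows)

    run_workflow()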

Fear of AI

Fears of AI have traditionally focused on killer robots from the future or malevolent machine intelligence. Those fears are understandable, if a bit misleading. The idea that an AI might decide that humanity "threatens its existence" runs counter to what we understand about machine learning.

Why? Simply: AI is not alive. It doesn't have instincts as an animal does, or executive functioning as one might find in a complex animal. Executive functioning, as we use the term here, is the drive in a biological organism to keep itself alive and to reproduce. It's the thing that makes you want to live, to eat, and to have children. It's the thing that makes you at once an animal and a person. It's need. Want. Desire. Fear. Love.

At most, a machine is a collection of reflexes. Nothing more. It's more comparable to a species of mushroom than to a thinking being. It evolves, in a sense, but that evolution is always restricted to 1s and 0s in a matrix of 1s and 0s. Does an AI know what those 1s and 0s are? No, it does not. Does it care what they are? Also no. Will it begin to care at some future point? Some would like to think so, but we have grave doubts that any ML-based digital technology could ever develop anything like a biological executive function.

Tireless and reliable

This is both a good and a bad thing. It's good in the sense that ML can work tirelessly and reliably on tasks that demand a great deal of human attention and invite the kinds of simple errors our brains are not equipped to avoid. The bad is that an AI also has no understanding of human morality or law. The desire to follow rules or do the "right thing" is part of executive functioning too, and so can never be completely guaranteed with ML.

This is partly why thinkers like Noam Chomsky have referred to ML as a system of "obfuscated plagiarism." There is some justice in that description. An AI doesn't understand what ownership is, nor does it care whether what it does infringes on someone's intellectual property. It doesn't care in either the moral or the legal sense; those concepts simply have no meaning to it.

Possible and impossible

Another problem is one Chomsky gives special attention: 

"But ChatGPT and similar programs are, by design, unlimited in what they can 'learn' (which is to say, memorize); they are incapable of distinguishing the possible from the impossible."

Guardrails introduced to stop AI systems from stealing intellectual property are, for the most part, band-aids on a fundamental limitation of the technology. As long as there is no equivalent of human judgment going into the training of an LLM or DLM, it will always be one step away from either stealing or simply lying, as AIs have been caught doing countless times already. The embarrassing nature of these failures, as well as their regularity, signals that the creators of these systems have few tools at hand to stop this from happening; otherwise, they would have already.

This is not to say our world is not about to change drastically and irreversibly because of our inventions. Even a fungus can be dangerous, particularly if it can be weaponized by humans. The impact of AI on our daily lives will be profound, in ways seen and unseen.

Proponents will point out that a fungus can be the next penicillin, and it could be. But it could also be the next aflatoxin.

The power to mislead

AI-generated text can and will be used to mislead. AI-generated sound and imagery will be used to lie and deceive. They already are.

Who hasn’t begun to ask themselves, as they browse the news or social media, how much of the text they see already comes from an algorithm designed to trick them, and how much of the imagery is real? A media landscape whose negative effects are already intensified by "filter bubbles" and "rage engines" can only become more potent when it's filled with what Chomsky calls "infinite variety from finite resources."

There is nothing stopping AI from completely crowding out human voices in our shared discourse, and that has scary implications for thought.

It also has scary implications when applied outside the narrow realm of media and discourse. Recently, Larry Ellison blithely suggested that AI-driven universal surveillance could "keep us on our best behavior."

Well, it might do that. But to go out on a limb: we don't think living in a panopticon society is a desirable vision of the future.

The Most Dangerous Animal

It seems inarguable that we live in some of the most economically and socially uncertain times in recent history. Even the nature of truth and facts is under assault from factions that would like to use modern technology to change our very experience of reality itself: creating new "alternative" realities in the form of virtual or mixed reality, and appending ML-based AI to everything from our work to our social interactions. That sense of uncertainty and unreality will only grow.

Yet the dangers we face are really human in nature, even if ChatGPT and DALL-E have given them a digital form. The real dangers arise from what human beings do with the technology we conceive and from the fact that there will always be those who will be willing to hurt others (and themselves), to achieve a political or economic goal.

The tech industry has failed at every stage to assess the capacity for our inventions to have unintended consequences.

But that is not because the technology is dangerous. It is because people are. The story is an old one, and familiar to everyone: people are good at imagining the immediate prosperity and happiness that some piece of "newness" will bring into their lives. Yet that newness brings many unwanted and unsolved challenges; some of which are beyond difficult to solve. What is new and exciting often becomes real and horrifying.

The widespread modern concern over Large Language Model (LLM) AI systems is reasonable, though crucially not because machines are taking over. The anxieties of today stem from our present challenges: mounting ecological emergency, spiraling wealth inequality, resource depletion, and consequent global political instability. The wars for food and water resources have already begun.

We have already seen how propaganda and disinformation can turn people against each other.

An Age of Anxiety

We are living through a new age of anxiety, typified by fear of the consequences of globalism and technological innovation. That anxiety also prompts a renewed focus on the self, mental well-being, and the role of the individual in society.

We believe those are positive developments.

Our problems are mounting at the same time that the sudden availability of so much information and communication gives us a growing awareness of our shared challenges. But that awareness is not being met with the progress we desperately need to avoid catastrophe. Thus: anxiety.

Taken together, it's easier to see that this fear of AI is really an embodiment of that anxiety: the realization that there is not enough to go around, and that if we are willing to lie to and deceive each other, others will happily lie to and deceive us. This has driven a desire to look inward, to our closer communities of shared interest, for a sense of safety. Within interactions we control, we feel that we are not being tricked or cajoled or manipulated.

Yet whatever good this renewed focus on wellness may confer, it comes at a cost. Far from building a shared global understanding of our challenges and their solutions, we have weaponized our information economy to deny these problems and to place the blame, and the onus of change, elsewhere: somewhere we are not responsible.

We now understand that our exploitation of the world’s resources is bringing ruin, but we are responding to this unpleasant reality with a war, not on the problem, but on reality itself. Rather than mustering the force of scientific consensus to tackle the problem, our scientific community is too busy defending itself from constant attack, as if questioning the consensus would change the facts.

Doomscrolling

So here we are, in a world where we can vividly see and prolifically discuss our impending doom, yet we can’t seem to do anything about it. We can see that AI technology is progressing incredibly fast… but we cannot see how that technology will solve our problems, and we’re increasingly concerned that it will only make things worse. That's where we are: stuck between the desire to deny culpability and the desire to place blame.

One look at our popular entertainment tells that story. Succession, Chernobyl, Oppenheimer: these are stories less about family, communism, or the atomic bomb than about the consequences of irresponsibility and lies. They are stories about how we refuse to face the consequences of our own actions. The story of the past five years has been defined by irresponsibility girded with lies: as if a global pandemic could be gotten through without acknowledging the conditions that created it or taking the steps necessary to resolve it.

These are the perfect conditions for a panic. Perhaps that panic has already begun.

“A Better Place for Whom?”

This is where we step in, as a company focused on what is often called "Robotic Process Automation" (RPA), but which we prefer to call a "Universal Application Programming Interface" (UAPI): a program that connects to all other programs in order to let a person leverage the full power of modern computing for their own use.

We are building a technology that should, done correctly, rival the ability of the computer in Star Trek: The Next Generation: a way for you to use plain language to access any computer system or program you can connect to, and make it do what you want it to do, provided the program or system allows it and it is legal.
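
To show the shape of the idea, here is a deliberately tiny, hypothetical sketch in Python. Every name in it is invented, and the "intent parsing" is a keyword stand-in for what a language model might do; note that the permission check comes before any action is taken.

    # A toy "universal API": plain language in, a permitted action out.
    CONNECTORS = {
        "calendar": lambda req: print(f"Creating event from: {req!r}"),
        "email":    lambda req: print(f"Sending email for: {req!r}"),
    }

    ALLOWED = {"calendar", "email"}  # only actions the program/system permits

    def handle(request: str) -> None:
        # Stand-in for real intent parsing (an LLM could provide this step).
        target = "calendar" if "meeting" in request.lower() else "email"
        if target not in ALLOWED:
            raise PermissionError(f"{target} is not permitted")
        CONNECTORS[target](request)

    handle("Schedule a meeting with Dana on Friday at 10")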

In fact, it seems genuinely strange to us that this does not seem to be the priority of so-called "AI companies" like OpenAI, which seem more interested in the possibilities of displacing human workers than in enabling a future where workers are set free from the tedious digital tasks that now practically consume them.

Why this tendency to look upon every new technological breakthrough as a way to get rid of human beings in every organization, from government to school to business? Aren't human beings the greatest asset we have? Shouldn't AI make every human being even more valuable and productive?

Machine knowledge is not human knowledge. Machines do not improvise or adapt. They do not grow. Human beings do. But it can hardly be denied that the primacy of human beings in all organizations is being challenged.

Technology Alone Solves Nothing

In this atmosphere of panic and change, there are the two competing desires we mentioned: the desire to brightside, and the desire to doomsay. Both have to be dealt with.

Having the computer from Star Trek cannot mean that we have the reality of the show Star Trek. That reality doesn't exist because of that computer; it exists alongside it.

Star Trek is not about technology anyway: it's about people. People who make the right decisions for all of humanity. The technology they have is what helps them to do the good they do, but it's always the people who actually do good.

We sincerely believe that giving people this ability can make the world a better place, but critically, we do not believe that this alone will do it, nor are we so naive as to assume that it couldn’t be used to do the opposite of what we intend. We are all flawed people, and therefore, any system we build will reflect our own weaknesses and myopias.

When a company says that it's "making the world a better place" with technology, one should always immediately ask: "A better place for whom?" Technologists are prone to techno-solutionism: the idea that there is always a technological solution to a societal, scientific, or historical problem.

Our technology will not save the world.

We would love for it to make the world a better place, as so many companies have promised to do. But we cannot make even that guarantee. Instead, we can only talk about what we wish to do. One thing we wish to do is to be honest and to operate with our eyes open.

That means practicing humility and understanding that we don’t know everything.

Big Data and Risk

There is a common misunderstanding about the era of Big Data that we would like to correct.

For years, people have been warned about invasions of privacy and mass behavioral manipulation. Those are valid concerns, but the real driver of the Big Data economy is the understanding and mitigation of risk.

Access to more and better data ensures an understanding of what risks an organization or government faces. In a highly financialized global economy, those with the data can always benefit, regardless of the outcome.

Why risk? Consider the risk to an advertiser of wasting money on adverts that don't reach audiences. Big Data mitigates that risk. Consider the risk to an insurance company of losing money on a customer with poor driving habits or a poor health history. Big Data mitigates that risk, too.

From climate change to credit card debt, Big Data is helping big companies to mitigate and control risk, and the cost of all that mitigation is placed on the people whose data is being used. If big businesses are saving money, that generally means that small businesses and individuals are getting squeezed.

Big corporations and investors experience more growth and less risk, and you pay for it.

So while you are often told that the danger of Big Data is to your privacy, what we're telling you is that the driving force behind that danger is an obsession with risk.

The power of access

This is why the power that comes from controlling access to data cannot be overstated. Access to data is essential to assessing risk in our technological world. The risk of losing customers. The risk of climate change. And the risk of liability.

If we view the growing world of data and automation now available to big companies as a kind of risk to individuals and small businesses, then we can see that the big data revolution will, if unchecked, tend to benefit the already powerful instead of the individual members of society who will experience the consequences.

Although ordinary people generate the data, those who aggregate and use it are the ones benefiting from it. And if you're skeptical about the value of the data you produce as an individual, pay attention to the behavior of those who work in the tech industry and consequently understand that value more concretely. Why is data security so important to the people who work with big data every day?

Ultimately, this realization has led our industry to overestimate its own true value to society. The data is valuable in an economic sense, but that doesn't necessarily mean that the products built on it are a net good for society. If what the big data revolution has effected is a wealth transfer from the little guy to the big investor and mutual fund, then is it really good at all?

"Bottom up, not boardroom down."

The tech industry is full of people who fervently believe that if they were in charge, the world would be a better place. Yet as the tech industry grows, is the world becoming a better place? Are inequality and division not growing?

More technology may not be enough. Progress must be shared, controlled, and understood by the least empowered. If we want to give ordinary people a chance to compete and to dictate their own futures, then we must put as much data and as many of those advanced tools into their hands as we are able.

The world of Star Trek may not be utopian because of its computers, but notice one thing: ordinary people are the ones who benefit most from the computers' help. They are not used as a means of mass behavioral control, propaganda, or enslavement. They are used primarily to help people achieve shared prosperity.

This is what we aim and fervently hope to do. To create a technology that is empowering from the bottom up. Not from the boardroom down.

People are the Answer

We believe that people are the solution to our shared problems. Technology made to be open and accessible is the kind that can be used to do the most good. But right now, the cutting edge of technology isn't doing much for ordinary people. The power of big data is not being distributed via our industry. Instead, it's being concentrated.

We can change that. And we should. 

AI automation is no different. For all the productivity growth since 1980, few of the economic gains have gone to workers. We're talking about fast food workers, taxi drivers, logistics workers, and other so-called "low-skill" professions. These professions have become highly dependent on automation, but ordinary workers find their bargaining power shrinking and their workloads increasing. As our collective ability to do things grows, the economic possibilities of workers seem to shrink.

Shared prosperity, shared stability

It doesn't have to be this way. Year on year, there's no reason why workers shouldn't be paid more. There's no reason why wealth should continue to be concentrated as our economy grows. Why does the S&P 500 grow reliably in value, year after year, while real wages stagnate?

And why is the fact that wages are stagnating not seen as a problem? Such a state of affairs is a huge risk, not just to any individual company, but to the planet. Shared prosperity is shared stability as well.

Our industry is failing ordinary people. It's failing to acknowledge that to "disrupt" and transform someone's work carries with it a large responsibility: to be aware of how that disruption alters people's lives and affects their livelihoods. Disruption causes people's employment to grow more precarious and drives them to work longer hours, with little productivity gain to show for it.

Again, that's not just inefficient: it's a systemic risk.

Automation cannot be synonymous with exploitation on a massive scale. That state of affairs should strike all of us as a failure. For whom are we changing this world?

The answer should be easy: for people. For ordinary people, just like us. 

Putting People Back in Control

This is all a long way of getting to the point: doFlo is founded on the idea that technology and data shouldn't just benefit the public but should be owned by the public; that the technology already exists today to make your life far less stressful and depressing; and that it is in your interest to have direct control over how that technology is used.

Moreover, it is our mission to support the disempowered and the disenfranchised in all that we do. We believe that isn't only the right thing to do — it's the smart thing to do.

Let’s make technology that doesn’t replace people, but enables them. Let’s do automation that inspires people, not that they have to fear. And let’s create prosperity for regular people, not out of regular people.

Let's acknowledge that all workers have important skills, and let’s give them more abilities, and make them each more valuable. It just may be that the workers themselves have some of the best ideas on how to make their own lives better. That's often the case. 

This outlook is not just a part of our marketing, but also a part of our strategy. We are targeting small proprietors and individuals who are self-employed. This is simply because we believe that this is where the need is greatest, and where we can do the most good.

In our industry, the watchword is always "disruption," but disruption usually comes at a price for the disempowered. We want to create a technology platform that puts control of the future into as many hands as possible, with faith in the idea that regular people know what they themselves want better than anyone else.

A safe, happy, wealthy society of free people is rich in ways you can't buy. 

Our Core Ideals:

To make a concrete commitment to this ideal and to the points we've raised in this post, we once again propose the following:

  • Serve people over systems

  • Ask: “For whom is this making the world a better place?”

  • Our employees are people first, and their lives are more important than their jobs (no grindset!)

  • Make our technology available to as many people as possible. At all times, consider the customer’s ability to pay, not just their willingness to do so.

  • The data that our users generate should belong to them. Our users are not our “product.”

Let's do this.

Well, what say you? Does our set of core ideals speak to you? Want to join us in our mission, whatever joining means to you? E-mail our CEO at Will.Butler@doflo.com and tell him about it. Until then: we'll see you in the comment section.


Copyright © 2024 doFlo Inc.
