That is quite an Indian corporate argument.
One should "reinvent the wheel" in this case mainly because it is intellectually rewarding, but also because very few people have achieved it, and because one then has something to contribute to the world.
Well, corporate interests drive industry; companies need to be secure in their cash flow, and they do that by holding back until absolutely necessary. You think they can't add more cores to existing designs?
Does adding more cores make stuff faster? Not if the software hasn't leveraged it in its development.
A large part of software even now runs on the main thread.
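To make that concrete, here is a rough sketch in C with pthreads (made-up workload size and a fixed four-way split, purely for illustration): the same sum done on the main thread alone versus split across worker threads. Extra cores only help the second version.

    /* Rough sketch, not a benchmark: main-thread-only work vs. a four-way split. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdint.h>

    #define N 100000000UL
    #define THREADS 4

    typedef struct { uint64_t start, end, sum; } chunk_t;

    static void *partial_sum(void *arg) {
        chunk_t *c = arg;
        for (uint64_t i = c->start; i < c->end; i++)
            c->sum += i;          /* each thread sums its own slice */
        return NULL;
    }

    int main(void) {
        /* 1) main thread only: uses one core no matter how many exist */
        uint64_t serial = 0;
        for (uint64_t i = 0; i < N; i++)
            serial += i;

        /* 2) split across THREADS workers: can use THREADS cores */
        pthread_t tid[THREADS];
        chunk_t chunks[THREADS];
        for (int t = 0; t < THREADS; t++) {
            chunks[t] = (chunk_t){ t * (N / THREADS), (t + 1) * (N / THREADS), 0 };
            pthread_create(&tid[t], NULL, partial_sum, &chunks[t]);
        }
        uint64_t parallel = 0;
        for (int t = 0; t < THREADS; t++) {
            pthread_join(tid[t], NULL);
            parallel += chunks[t].sum;
        }
        printf("serial=%llu parallel=%llu\n",
               (unsigned long long)serial, (unsigned long long)parallel);
        return 0;
    }

On one core both versions take roughly the same time; on four cores only the threaded one scales, which is why software that stays on the main thread sees no benefit from more cores.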
Innovation for the sake of innovation is not organic or cost-effective.
In my own processor design project it took me a few years to simplify the processor instructions and the I/O hardware specification. I still have to design the GPU, which I think will create a need for more instructions in the general-purpose cores. I am learning as I proceed.
Whatever you are doing on your own is a non-silicon, theoretical PD scenario or a large electrical breadboard demonstrator, I am assuming.
What I won't be doing is porting the C language to the assembly instructions, because the assembly will be simple enough.
What I also won't be doing is porting Linux to the processor. I will be writing a microkernel-based OS, for which I have already specified some system calls.
As for the app ecosystem, my plan is to write a virtual machine for x86.
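For what it's worth, the heart of any software VM is just a fetch-decode-execute loop. The toy below (plain C, made-up opcodes, nothing like the real x86 encoding and not my actual design) is only meant to sketch that shape; a real x86 VM also has to handle the full instruction encodings, memory, privilege levels, and devices.

    /* Toy fetch-decode-execute loop; the opcodes are invented for illustration. */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_HALT = 0, OP_LOADI = 1, OP_ADD = 2, OP_PRINT = 3 };

    int main(void) {
        /* program: r0 = 2; r1 = 2; r0 = r0 + r1; print r0; halt */
        uint8_t program[] = {
            OP_LOADI, 0, 2,
            OP_LOADI, 1, 2,
            OP_ADD,   0, 1,
            OP_PRINT, 0,
            OP_HALT
        };
        int32_t reg[4] = {0};
        size_t pc = 0;

        for (;;) {
            uint8_t op = program[pc++];              /* fetch */
            switch (op) {                            /* decode + execute */
            case OP_LOADI: { uint8_t r = program[pc++]; reg[r] = program[pc++]; break; }
            case OP_ADD:   { uint8_t a = program[pc++]; uint8_t b = program[pc++]; reg[a] += reg[b]; break; }
            case OP_PRINT: { uint8_t r = program[pc++]; printf("%d\n", reg[r]); break; }
            case OP_HALT:  return 0;
            default:       return 1;                 /* unknown opcode */
            }
        }
    }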
Well, you are just skipping having to write a C compiler, I'm assuming, the thing that tokenizes the code, converts it to the instruction set/assembly, and delegates to the OS drivers.
Now, since your machine will be unique, the drivers and the OS interface will have to be unique too, because you are not going with the standards.
That, I think, is what you are trying to say.
Is this funded? Or is it your own R&D?
From what little I know of quantum computing, the challenge seems to be miniaturizing the hardware and getting it to run at room temperature.
So I think for the next ten years quantum computing will be available as a cloud service.
Well, miniaturization comes much later; the current challenge is computational robustness.
If you are dealing with quantum states, the states are finicky: they have trouble holding their information correctly and are prone to absorbing entropy from external factors. That makes both storage and processing fragile, so to reduce the overall entropy they are kept in extremely cold states.
Simple analogy:
Current arch:
You want 2 + 2 = 4, and you get 4, since all the bits hold their charge robustly, even with attenuation and noise, local and ambient.
Quantum arch:
You may get 2 + 2 = 5, when external factors incite a qubit to change state, especially when you try to read it. Invoke Mr. Heisenberg.
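To put the analogy in code: the toy below is a purely classical simulation in C (a made-up 5% flip probability, no actual quantum mechanics) of a bit that sometimes flips when you read it. With any nonzero noise, 2 + 2 occasionally reads back as something other than 4.

    /* Classical toy model of the analogy: a stored bit that may flip on read. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* read a stored bit, but flip it with probability p (environmental noise) */
    static int noisy_read(int bit, double p) {
        return ((double)rand() / RAND_MAX < p) ? !bit : bit;
    }

    int main(void) {
        srand((unsigned)time(NULL));
        const double p = 0.05;          /* made-up noise level */
        int errors = 0, trials = 10000;

        for (int t = 0; t < trials; t++) {
            int a = 2, b = 2, sum = 0;
            /* read each of the 3 bits of a and b through the noisy channel */
            for (int i = 0; i < 3; i++) {
                sum += noisy_read((a >> i) & 1, p) << i;
                sum += noisy_read((b >> i) & 1, p) << i;
            }
            if (sum != 4) errors++;     /* e.g. 2 + 2 = 5 when bit 0 flipped */
        }
        printf("2+2 != 4 in %d of %d trials at noise p=%.2f\n", errors, trials, p);
        return 0;
    }

Driving that flip probability down is, loosely speaking, what the extreme cooling and the error-correction schemes are for.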
The robustness you enjoy with the current architecture, with respect to issues like cache misses, signal latency, and context isolation, to name a very few, was made possible over decades by thousands of people across domains.