Nice, man. Keep me updated on the development. Do you use Cadence?

@Donatello This might interest you.

there is no prototype at present... we are trying to raise money to buy a xilinx fpga... we are a very small group... i have contacts with former senior government officers... but money is the issue. :sad:

do you want to participate?? we need hardware engineers... it is anyway a transnational socialist project, maybe you and donatello can contribute, if you want to... :-)
 

Sure man, I'll be in India by mid-December; we'll talk more about it then.

I'm not a pro in VLSI; I'm good at device fabrication and device physics.
 
. . .
Just submitted a mini project on a 512-bit 10T SRAM with sleep transistors (power gating), for reduced dynamic and leakage power. :D

what coincidence... :-)

i have one question though... it may sound silly to you as an engineer, but i thought srams generally have six transistors per bit... is your device an advancement??
 

No, it's not an advancement. Even when a transistor is off ('0' on the gate for NMOS, '1' for PMOS) in the inverter latch of the basic SRAM cell, there is still a minor current flow from source to drain: the leakage current. With two extra NMOS and two extra PMOS sleep transistors gating the supply rails, you cut off that leakage path. The dynamic power of a single 10T SRAM cell is about four times lower than that of a 6T cell.

But the circuit exhibits less stability than a 6T SRAM. Because of time constraints, we couldn't develop it further for more reduction in leakage power; we'll continue with it next semester.
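For intuition, here is a back-of-the-envelope sketch of the trade-off described above: sleep transistors cut the idle leakage path at the cost of extra devices. All numbers and the 100x leakage cut are illustrative assumptions, not the project's measured results.

```python
# Toy power model for a single SRAM cell. All parameter values are
# illustrative assumptions, not measurements from the project above.

def dynamic_power(alpha, c_node, vdd, freq):
    """Switching power: P_dyn = alpha * C * Vdd^2 * f."""
    return alpha * c_node * vdd**2 * freq

def leakage_power(i_leak_per_tx, n_leaking_tx, vdd):
    """Static power from subthreshold leakage of transistors on the supply path."""
    return i_leak_per_tx * n_leaking_tx * vdd

VDD = 1.0        # supply voltage (V), assumed
I_LEAK = 1e-9    # off-state leakage per transistor (A), assumed
F = 100e6        # access frequency (Hz), assumed

# 6T cell: the latch sits directly between Vdd and Gnd, so it leaks in idle.
p_leak_6t = leakage_power(I_LEAK, 6, VDD)

# 10T cell with power gating: in idle, the sleep transistors disconnect the
# latch from the rails; only their own (much smaller) off-current remains.
p_leak_10t_idle = leakage_power(I_LEAK * 0.01, 4, VDD)  # assumed 100x cut

print(f"6T idle leakage : {p_leak_6t * 1e9:.2f} nW")
print(f"10T idle leakage: {p_leak_10t_idle * 1e9:.4f} nW")
```

The model is deliberately crude (it ignores gate leakage, wake-up energy, and the virtual-rail voltage droop), but it shows why gating pays off only if the cell spends enough time idle.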
 

i sense your design increases the power savings in a circuit already working with reduced power consumption via the clockless mechanism... so the overall system will have reduced power consumption, i suppose... though the extra four transistors take up extra circuit area, which will have to be balanced against other reductions... am i correct??
 

Ironically, as I said, because of the time constraint we didn't consider the IDLE state of the SRAM cell, so that contributed to the leakage power of the circuit. This time I'm considering a dynamic threshold voltage through body biasing, as it would give a lower-power circuit as well as better noise immunity.

Leakage power increases with the number of transistors, so I'm thinking about gating both Vdd and Gnd in the idle state.
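The body-biasing idea works because subthreshold leakage falls exponentially with threshold voltage, roughly I_sub ∝ exp(-Vt / (n·kT/q)), and a reverse body bias raises Vt through the body effect. A minimal sketch, with assumed process constants rather than the project's actual parameters:

```python
import math

def subthreshold_leak(i0, vt, n=1.5, v_thermal=0.026):
    """Subthreshold leakage ~ I0 * exp(-Vt / (n * kT/q)). Constants assumed."""
    return i0 * math.exp(-vt / (n * v_thermal))

def body_biased_vt(vt0, gamma, vsb, phi_f=0.35):
    """Body effect: Vt = Vt0 + gamma * (sqrt(2*phi_f + Vsb) - sqrt(2*phi_f)).
    Reverse body bias (Vsb > 0 for NMOS) raises Vt, cutting leakage in idle."""
    return vt0 + gamma * (math.sqrt(2 * phi_f + vsb) - math.sqrt(2 * phi_f))

VT0, GAMMA = 0.35, 0.4   # assumed zero-bias threshold and body-effect factor

active_leak = subthreshold_leak(1e-6, VT0)  # no body bias, fast but leaky
idle_leak = subthreshold_leak(1e-6, body_biased_vt(VT0, GAMMA, vsb=0.4))
print(f"idle leakage reduction from reverse body bias: {active_leak / idle_leak:.1f}x")
```

This is also why the noise-immunity point holds: a higher Vt in idle widens the margin against spurious switching, at the price of slower access when the cell wakes up.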
 
@Skull and Bones

thanks for the info... lets see if we can have a meet when you come to india.

Without a clock, wouldn't the processor run really slowly for a specific set of tasks, since it doesn't know when and how to honor an interrupt?

1. we have limited the number of user tasks that can be run on the processor... and then added os-level tasks which continuously monitor status bits in the very few i/o interfaces, and raise os-level interrupts which will be handled by user-level threads... this latter method is similar to the user-level "interrupt service threads" in qnx os.

2. the os is a hybrid architecture ( a mix of microkernel and monolithic ).

3. the i/o system has been designed around the processor, and not the other way around... for example, user input - via stylus - is continuously monitored... so there is no high-load effect on the os.

these things and others have enabled removing the need for hardware interrupt pins and processor support for them.

i didn't understand which specific tasks would be slowed down.
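The polling scheme in points 1-3 (OS-level monitor tasks watching I/O status bits and raising software-level events for user-level handler threads, in the spirit of QNX's interrupt service threads) could be sketched roughly like this. All the names and the fake device are hypothetical; Python's threading stands in for the real OS primitives:

```python
import threading
import queue
import time

# Hypothetical sketch of interrupt-free I/O: an os-level monitor thread
# continuously polls a device status bit and posts events to a queue,
# which a user-level "interrupt service thread" drains.

class FakeDevice:
    def __init__(self):
        self.status_bit = 0      # set to 1 by hardware when data is ready
        self.data = None

    def poke(self, value):       # simulate external input (e.g. a stylus tap)
        self.data = value
        self.status_bit = 1

def monitor(dev, events, stop):
    """OS-level task: poll the status bit instead of taking a hardware IRQ."""
    while not stop.is_set():
        if dev.status_bit:
            dev.status_bit = 0           # acknowledge the device
            events.put(dev.data)         # raise an os-level software event
        time.sleep(0.001)                # polling interval

handled = []

def service_thread(events, stop):
    """User-level handler, analogous to a QNX interrupt service thread."""
    while not stop.is_set():
        try:
            handled.append(events.get(timeout=0.01))
        except queue.Empty:
            pass

dev, events, stop = FakeDevice(), queue.Queue(), threading.Event()
threads = [threading.Thread(target=monitor, args=(dev, events, stop)),
           threading.Thread(target=service_thread, args=(events, stop))]
for t in threads:
    t.start()
dev.poke("stylus-tap")
time.sleep(0.1)
stop.set()
for t in threads:
    t.join()
print(handled)
```

The cost, as the question above anticipates, is the polling interval: latency is bounded by how often the monitor loop runs rather than by interrupt hardware, which is why the design limits the number of user tasks and I/O interfaces.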

Hmm, okay. Well, I understand the app craze, since someone sitting at a cheap computer with good skills in C/C++/C# and/or Java/PHP can create his own work and sell it to millions. That's the smartphone world, and those who jump on the bandwagon first always come out winners. That's why all you see now is some sort of modified Skype or WhatsApp or Viber... I mean, how many messengers does one really need? Or photo-sharing sites?

so app-making is as non-intellectual as "creating" yet another distribution of linux... :)

So what work do you do? I didn't understand the clock less part, any links to your or your group's published literature? Would love to read and reflect.

in june i resigned from a media company to continue my portable computer project, which i had started in 2008... the project includes the clock-less microprocessor... it is part of an overall socialist project... but apologies, i can't share the names and more details of the project on this site, though you should look at the previous posts above, between me and "skull and bones".

i will ask the webmaster for temporary permission to send a private message to you and skull, and exchange our mail ids to continue the discussion. :-) skull says he will be coming to india mid-december... maybe i can meet him then and discuss more with you.

but what i also told him is that we are trying to raise money to buy an fpga board to demonstrate the processor's prototype.
 

Cool. What CAD tool are you using to design the processor? Are you defining your own NMOS and PMOS devices or using pre-existing cells/libraries?

Sounds like an interesting project.
 

Pakistan Defence Latest Posts
