
ICube UPU, the next step in processor evolution?

The CPU and GPU will still be distinct on the AMD Fusion. The ICube UPU integrates the CPU/GPU into a homogeneous core with shared registers, instruction set, etc. That's why an entire support ecosystem needs to be built from scratch, while AMD already has one. It looks like the ICube UPU's best chance of success lies with the smartphone and tablet markets. Supposedly they have ported Android over and are now working on hardware support with some mobile partners.

I read somewhere that the ICube is massively scalable and more suitable for parallel computing than existing processors. Its efficiency supposedly does not drop off as severely as other processors' the more cores you use; in other words, you could put thousands of these cores together and they would still behave efficiently. This would suggest that it would be great for cloud computing and supercomputers. It sounds a lot like tiled cores on a single enormous shared cache with a crossbar interconnect, which China is already working on with the Godson-T processor. It seems like hype given the limited information, but interesting. If there's any truth to this, then I think there's more to the ICube processor than its mobile application.

Depends on how well it is implemented at scale.
A shared cache does bring its own headaches, does it not? The memory controller is not going anywhere, and it will bring its own latency issues.
 
I think the main reason for the single L2 cache is to simplify scaling the absolute number of cores, despite the performance hit. That's the only reason I can see for this sort of design. Otherwise, it would have to have something similar to the interleaved L2 caches of the Godson-T. There isn't a lot of detail about the inner workings yet, but I'll bet there is something similar to the Godson-T's Basic (local cache access) and Professional (global cache access) modes when deciding whether to access the shared L2 cache. This would go a long way towards reducing the latency. For the foreseeable future, I'm pretty sure the ICube UPU won't be coming out in a many-core version until it achieves some success in the mobile market with Android. Its architecture seems geared towards easy programmability at the expense of maximum performance. I wouldn't be surprised if they are seriously thinking of eventually making a many-core version for massively parallel supercomputers...as in supercomputers for a national cloud computing infrastructure. If that were the case, then an entire ecosystem of relatively easily programmed commercial software that had its origin on the mobile side (Android) could be easily modified to run on this cloud infrastructure, without fear of piracy, to give China's domestic software industry a humongous boost. There is, after all, currently a large effort towards cloud computing in China.
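
Just to put rough numbers on why local-versus-shared cache access matters so much here, below is a back-of-the-envelope sketch with entirely made-up latencies (none of these figures come from ICube or Godson-T documentation). It only shows how average access time balloons once a large fraction of accesses has to cross the chip to a shared L2.

Code:
# Back-of-the-envelope sketch; every latency here is an invented, illustrative cycle count.
L1_LOCAL = 2      # hit in the core-local cache
L2_SHARED = 25    # hit in the big shared L2, across the on-chip interconnect
DRAM = 150        # miss all the way out to the memory controller

def avg_latency(local_fraction, shared_hit_rate):
    """Average access time when 'local_fraction' of accesses stay core-local and
    the rest go to the shared L2, some of which then miss to DRAM."""
    remote = 1.0 - local_fraction
    return (local_fraction * L1_LOCAL
            + remote * (shared_hit_rate * L2_SHARED + (1 - shared_hit_rate) * DRAM))

print(avg_latency(0.95, 0.90))  # mostly local accesses -> about 3.8 cycles
print(avg_latency(0.50, 0.90))  # half the accesses go remote -> about 19.8 cycles

The point is simply that a Basic/Professional-style split that keeps most traffic local is what would make a big shared L2 tolerable at all.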
 
You are more current than I am on the Godson-T's status. Do you have links referencing the 64-core version and the simulation of the prototype capable of 1000+ threads? Pretty impressive progress! The last time I read up on the Godson-T was around 9 months ago, and at that time they were still doing simulations on the 16-core variant. Congrats to the developers, impressive.

64-core Godson-T simulation
http://cacs.usc.edu/education/cs653/Peng-GodsonT-UCHPC10.pdf
http://cacs.usc.edu/education/cs653/GodsonT-UCHPC.pdf






The 1,000-thread CPU project was leaked at last year's Hot Chips conference. The chip is designed for next-generation data centers. I think EE Times Confidential has the full details.
 

I do see some potential for superscalar UPUs on the military front as well.
Current Chinese avionics (and related equipment) is fairly distributed, with GPPs governing various DSPs and other processing elements. A recent discussion with a colleague of mine from China at work hinted at a move towards common integrated processors (à la the F-22) for their processing needs. Perhaps the UPU (or its analogues) may find another market apart from commercial mobile device applications.
 
Considering the potential of the UPU, how effective do you think multi-modal radars would be as radar stealth killers once supercomputer capability is available in the size of a stove? My understanding is that the biggest hurdle for these kinds of radars is the required real-time computing power, because as the detection threshold is lowered, the computing power required rises exponentially. What if, sometime in the future, multi-modal radars not only had the computational power to sift through -30 dB at X-band of clutter in real time but were also datalinked to multi-modal ground stations, with the ground stations using L-band and the air assets using X-band to form a sort of hybrid-frequency multi-modal radar detection grid, Aegis style? That would make for some interesting BVR battles, especially once missile-carrying UAV "wingmen" are available. :)

How successful the commercial release of the ICube UPU will be depends, I think, on how much direct government subsidy any participating Chinese mobile partners receive. That is, unless they undercut on price so severely that we're talking about 10" tablets, modified to have more PC capabilities, costing the equivalent of $75 and potentially competing for U.N. contracts under the "laptop for every child" movement.
 


Current multi-core DSPs from TI peak out at 2 GHz per core with 4 cores each.
That gives them the ability to perform complex ops fairly quickly.
However, these DSPs still need a GPP to govern them, and in common usage operations such as standard filters are sometimes left to FPGAs. Latency naturally increases when you have different components with varying throughput, each with its own memory. Even with shared memory, as in TI's DaVinci systems, the delay in data transfers over the serial ports between the DSP and FPGA is noticeable, even with the ARM present.
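
To make the hop-by-hop latency point concrete, here is a toy latency budget for such a GPP + DSP + FPGA chain. Every figure below is invented purely for illustration (these are not measurements of any DaVinci or C6000 part); the point is just that each extra component and link adds both processing time and serialization delay.

Code:
# Toy latency budget for a distributed GPP + DSP + FPGA chain; every figure is invented.
def link_delay_us(payload_bytes, link_mbit_per_s):
    # Mbit/s is one bit per microsecond, so bits / (Mbit/s) gives microseconds.
    return payload_bytes * 8 / link_mbit_per_s

hops = [  # (stage, processing time in us, transfer delay in us)
    ("ARM GPP -> DSP over shared memory", 5.0,  link_delay_us(4096, 2000)),
    ("DSP processing",                    40.0, 0.0),
    ("DSP -> FPGA over serial port",      2.0,  link_delay_us(4096, 100)),
    ("FPGA filtering",                    10.0, 0.0),
]

total = 0.0
for stage, proc, xfer in hops:
    total += proc + xfer
    print(f"{stage:35s} {proc + xfer:8.1f} us")
print(f"{'total pipeline latency':35s} {total:8.1f} us")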
I still consider the APG-77 a better radar than the APG-81 due to the F-22's CIP. The ability of the CIP to "allot" resources to the APG-77 when needed gives it a massive advantage over other radars, as the CIP takes non-critical threads offline from the pipeline and allots more threads to the APG-77. That is scalability for you right there.
Considering that only 70% of the CIP is being used and that there are two CIPs, the APG-77 still has a large reserve of signal processing power available to it, limited perhaps only by the bus width.
Yet the CIP is old, based on the Cray.
Something like the UPU would not only be fast; the ability of common slots to hold arrays of UPUs would also potentially allow excellent serviceability. Just as the F-35 uses "apps" to run its SDR and other systems, the UPU would simply remove the need for different systems and their integration issues. Any system, from the radar to the radio or even the lights, could be controlled by a bank of UPUs: a super flight computer. While the radar is in standby, UPUs get freed up, and power management could be used to reduce heat and extend the life of the cooling systems.
In combat, non-essential apps can be taken offline from the shared cache (perhaps into DDR) and the L2 cache mostly dedicated to radar variables and data.
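
The core-reallocation idea could look something like the toy sketch below: a pool of identical cores and a scheduler that strips cores from low-priority apps when the radar needs a surge. The app names, priorities and core counts are entirely hypothetical, not a description of any real mission computer.

Code:
# Toy sketch of a "common integrated processor" style scheduler: a pool of identical
# cores, and a routine that strips cores from low-priority apps when the radar needs
# a surge. App names, priorities and core counts are all hypothetical.
apps = {  # app -> priority (lower number = more critical) and cores currently held
    "radar": {"priority": 0, "cores": 16},
    "radio": {"priority": 1, "cores": 8},
    "cabin": {"priority": 3, "cores": 6},
    "lights": {"priority": 3, "cores": 2},
}

def boost(app_name, wanted):
    """Grow 'app_name' towards 'wanted' cores, taking cores from the least
    critical apps (highest priority number) first."""
    need = wanted - apps[app_name]["cores"]
    donors = sorted((a for a in apps if a != app_name),
                    key=lambda a: apps[a]["priority"], reverse=True)
    for donor in donors:
        if need <= 0:
            break
        taken = min(need, apps[donor]["cores"])
        apps[donor]["cores"] -= taken
        need -= taken
    apps[app_name]["cores"] = wanted - max(need, 0)

boost("radar", 28)   # combat mode: radar surges, 'lights' and 'cabin' give up cores first
print(apps)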

On the price issue, the Chinese may even offer these UPUs at a lower price.
Initial batches will be "buggy" due to the need to churn out UPUs, but improvements will follow (personal experience with a Chinese licence-built version of a TI C6000).
 
Current multi-core DSPs from TI peak out at 2 GHz per core with 4 cores each.
That gives them the ability to perform complex ops fairly quickly.
However, these DSPs still need a GPP to govern them, and in common usage operations such as standard filters are sometimes left to FPGAs. Latency naturally increases when you have different components with varying throughput, each with its own memory. Even with shared memory, as in TI's DaVinci systems, the delay in data transfers over the serial ports between the DSP and FPGA is noticeable, even with the ARM present.
Good points but I was thinking more along the lines of reducing the detection range against a LO aircraft to <30km. If the detection threshold is reduced enough to make this at least potentially possible, the clutter rejection will become so overwhelming that only a veritable supercomputer would be needed. I think the ICube architecture makes this potentially possible because of the way you can integrate the whole package with hundreds of relatively powerful cores. How far this scales would probably hit bandwidth bottlenecks once the processing required multiple many-core chips, but it would definitely reduce detection range before then. Whether it is advisable to be illuminating an opposing LO aircraft is another topic, which is why I mentioned multi-modal radar, or at the least data-linking different radar installations (AWACS, UAV radar + missiles) to do the actual detection and tracking/attack while the LO manned fighter sits back in silence as the backup attacker.
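
For a sense of scale, the standard radar range equation says detection range scales with the fourth root of RCS for fixed radar parameters, so a quick bit of arithmetic, assuming a notional 200 km baseline range against a conventional target, shows roughly what a -30 dB signature costs and how much sensitivity would be needed to claw the range back:

Code:
# Standard radar range equation scaling: for fixed radar parameters, detection
# range goes as the fourth root of (RCS / minimum detectable signal).
# The 200 km baseline below is an assumed, notional figure.
def db_to_linear(db):
    return 10 ** (db / 10)

baseline_range_km = 200          # assumed range against a conventional target
rcs_reduction_db = -30           # the "-30 dB at X-band" figure from the discussion

range_factor = db_to_linear(rcs_reduction_db) ** 0.25
print(f"range against the LO target: {baseline_range_km * range_factor:.0f} km")  # ~36 km

# To restore the original range, the minimum detectable signal must drop by the
# same 30 dB, i.e. roughly 30 dB of extra sensitivity or processing gain.
print(f"sensitivity gain needed to recover full range: {-rcs_reduction_db} dB")

In other words, roughly 30 dB of extra effective sensitivity (longer integration, more channels, heavier processing) is what the "veritable supercomputer" would be buying you.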


Something like the UPU would not only be fast; the ability of common slots to hold arrays of UPUs would also potentially allow excellent serviceability. Just as the F-35 uses "apps" to run its SDR and other systems, the UPU would simply remove the need for different systems and their integration issues. Any system, from the radar to the radio or even the lights, could be controlled by a bank of UPUs: a super flight computer. While the radar is in standby, UPUs get freed up, and power management could be used to reduce heat and extend the life of the cooling systems.
In combat, non-essential apps can be taken offline from the shared cache (perhaps into DDR) and the L2 cache mostly dedicated to radar variables and data.
Are there any reasons you can think of why the US hasn't already created something along these lines? The closest thing to this description is the APG-77's CIP, but that's using an awfully underpowered processor considering what is possible.


On the price issue, the Chinese may even offer these UPUs at a lower price.
Initial batches will be "buggy" due to the need to churn out UPUs, but improvements will follow (personal experience with a Chinese licence-built version of a TI C6000).
I'm thinking of buying one of those no-name-brand, made-in-China smartphones that support Java so I can trade my stocks (the trading platform is in Java) on the go. It hurts to think how many times I missed a good trade because I wasn't at my desktop for every freaking second of an uptick or downtick. I've been shorting gold/silver mining stocks over the last 6 months on the expectation of PIG sovereign debt problems, and I can't remember when there's been an easier, more predictable time to make easy money.
 
An interesting discussion, everyone! Even though I'm an outsider to the industry, I can feel the heat...



You do know that TSMC, UMC and GlobalFoundries are simply pure-play fabs that rely on imported machines for making ICs, right? It is the same case with SMIC. Even Samsung, which does most of its design in-house, requires equipment from elsewhere. ASML, Nikon and Canon dominate the market for such machines, despite not being IC manufacturers themselves.


Ah, ASML? I don't want to be suspected of industrial espionage, but I do know the CFO of ASML, and its ex-CEO and ex-CFO :cheers: . Heck, if I go through the pal lists from all my schools, I am almost certain I'll find some of them currently working in key positions at ASML... The Netherlands is a small place, you know. But seriously, I just went to an ASML shareholder meeting at a hotel here not long ago. What impressed me was not some machines but its free pack of gifts, salmon-and-cheese sandwiches aside :tongue: , including a fashionable ASML ultra-thin dark blue pen that I now use for formal occasions. :tup:
 
Good points but I was thinking more along the lines of reducing the detection range against a LO aircraft to <30km. If the detection threshold is reduced enough to make this at least potentially possible, the clutter rejection will become so overwhelming that only a veritable supercomputer would be needed.
First...The clutter rejection threshold is arbitrary.

Second...The lower this threshold, the greater the number of radar returns that MUST be processed. There are no options. Otherwise, why is there a clutter rejection threshold in the first place?

Third...Above the clutter rejection threshold, there is no such thing as 'clutter'. So this statement: 'the clutter rejection will become so overwhelming that only a veritable supercomputer would be needed.' is categorically wrong. Put another way: above the rejection threshold there are targets or returns, and below the threshold there is clutter. So the better statement should be: 'If the detection threshold is reduced enough to make this at least potentially possible, the amount of radar returns will become so overwhelming that only a veritable supercomputer would be needed.'

Finally...How do we determine or declare what is not or IS a valid target? By detecting certain characteristics of the radar returns ABOVE the rejection threshold, such as Doppler, scintillation, clustering, and amplitude. Put another way: EVERY SINGLE radar return above the rejection threshold must be processed to search for PATTERNS.

Once a pattern is discerned through filters, with CLUSTERING the most prominent of characteristics, a 'target' is finally declared and tracked over a space/time continuum. It sounds 'Star Trek-ky' but that is exactly what it is...
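
To put rough numbers on the 'lower threshold means far more returns and clusters to examine' argument, here is a toy sketch (amplitudes, thresholds and the clutter model are all invented) that thresholds a strip of range cells and groups the survivors into clusters:

Code:
# Toy illustration; amplitudes, thresholds and the clutter model are all invented.
import random

random.seed(1)
# 10,000 range cells of background clutter, plus one weak "target" planted at cell 5000.
returns = [(cell, random.expovariate(1.0)) for cell in range(10_000)]
returns[5000] = (5000, 6.0)

def surviving_clusters(returns, threshold, gap=3):
    """Keep cells at or above 'threshold', then group survivors whose cell indices
    lie within 'gap' of each other into clusters."""
    hits = [cell for cell, amp in returns if amp >= threshold]
    clusters, current = [], []
    for cell in hits:
        if current and cell - current[-1] > gap:
            clusters.append(current)
            current = []
        current.append(cell)
    if current:
        clusters.append(current)
    return clusters

for threshold in (8.0, 4.0, 2.0):
    n = len(surviving_clusters(returns, threshold))
    print(f"threshold {threshold}: {n} clusters to examine")

The planted weak "target" only survives at the lower thresholds, precisely when it is buried among hundreds of other clusters that all have to be examined.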

WAVEFORM DIVERSITY FOR ADAPTIVE RADAR: AN EXPERT SYSTEM APPROACH | Mendeley
The focus of this paper is on the design of waveforms for efficient spatial and temporal adaptivity including the focusing of the transmitted energy on a physically-small target, so that it will be possible to synthesize waveforms in the space-time continuum for both monostatic and bistatic applications and so that they can be mission adaptive.

IEEE Xplore - Radar target recognition based on fuzzy clustering
Radar target recognition based on fuzzy clustering

Jiang Jing; Wang Shouyong; Yu Lan; Zuo Delin; Yang Zhaoming; Tang Changwen;
Key Lab., Air Force Radar Acad., Wu Han

A new method of recognizing the aircraft number of a radar target from a narrowband IF signal of non-coherent radar is presented. According to the received narrowband IF echo signal,...
And take note of the authors' names lest I be accused of using 'biased' sources.

Also note what I said: Detect THEN Track.

This order is very very very important and is usually phrased as 'Detect BEFORE Track'.

The problem with processing a lot of radar returns in trying to filter out an F-117 class body is indeed processing power. With the standard clutter rejection threshold, as much as we can agree on what is the 'standard' in a military setting, an F-117 class body is IMMEDIATELY dismissed by 99% of the world's military radars based upon TWO of the most prominent radar return characteristics: Clustering and Amplitude.

So here is the problem for processing power: the lower the clutter rejection threshold (and remember that above this threshold there is no such thing as 'clutter', only radar returns), the more radar returns there are with the same characteristics that were originally dismissed: clustering and amplitude. Put another way: above this rejection threshold, there will be plenty of clusters with a certain amplitude to keep the seeker busy for hours, if not days, trying to figure out what EACH cluster of them is. Which leads back to why we created the clutter rejection threshold in the first place.

Detect before Track.

But since we lowered the clutter rejection threshold in trying to process (or filter) out an F-117 class body and started seeing all these clusters of returns that will keep us busy for hours if not days, we must use some other radar target characteristics BESIDES clustering and amplitude. Since clustering and amplitude can be measured independent of time, how about we use the speed, and maybe the Doppler component, of EVERY cluster to try and find an F-117 class body?

Track before detect.

Track-before-detect - Wikipedia, the free encyclopedia
In radar technology and similar fields, track-before-detect (TBD) is a concept according to which a signal is tracked before declaring it a target. In this approach, the sensor data about a tentative target are integrated over time and may yield detection in cases when signals from any particular time instance are too weak against clutter (low signal-to-noise ratio) to register a detected target.

In other words, if there are 100 clusters of radar returns, ALL of them must be continuously processed over time before the system can declare that one or more is/are valid target(s). So what this means is that at some level of lowering the rejection threshold, we have no choice but to filter on not only clustering and amplitude but just about as many of the known radar return characteristics as possible to try to find an F-117 class body. If there are 1,000 or 10,000 clusters of returns, ALL of them must pass through these filters.
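
A minimal toy of the track-before-detect idea quoted above, with entirely invented numbers: a return too weak to stand out in any single scan is integrated over ten scans and only then compared against the clutter.

Code:
# Track-before-detect toy; all numbers are invented.
import random

random.seed(7)
SCANS, CELLS = 10, 200
TARGET_CELL, TARGET_AMP = 77, 2.0     # weak extra energy added to one cell each scan
SINGLE_SCAN_THRESHOLD = 5.0

integrated = [0.0] * CELLS
target_single_scan_hits = 0
for _ in range(SCANS):
    for cell in range(CELLS):
        amp = random.expovariate(1.0) + (TARGET_AMP if cell == TARGET_CELL else 0.0)
        integrated[cell] += amp
        if cell == TARGET_CELL and amp >= SINGLE_SCAN_THRESHOLD:
            target_single_scan_hits += 1

best_clutter = max(v for c, v in enumerate(integrated) if c != TARGET_CELL)
print("scans in which the target cell alone crossed the single-scan threshold:",
      target_single_scan_hits)
print("integrated energy in the target cell      :", round(integrated[TARGET_CELL], 1))
print("best integrated energy in any clutter cell:", round(best_clutter, 1))

The per-scan look rarely flags the target cell, but after integration its accumulated energy typically stands clear of the best clutter cell, which is exactly the trade: you buy detectability by processing every candidate cell over time.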

Here is what makes the F-22 and F-35 so far unique as threats, and how they would process whatever radar returns they may find: the opposition must be active in trying to find them, and in doing so, the F-22 and F-35 can afford to be 'stingy' with their own active radars because the opposition is already making its presence known. Then, with their superior radar technology, they are able to make the opposition stand out even more against the background, which in turn leads to superior target resolution for the missiles with minimal active radar scanning.
 
First...The clutter rejection threshold is arbitrary.
Clutter rejection is largely a function of the signal processing filters, of which you yourself mention a few examples.


Second...The lower this threshold, the greater the number of radar returns that MUST be processed. There are no options. Otherwise, why is there a clutter rejection threshold in the first place?
Correct; that's why the lower threshold would make the processing load overwhelming and require a supercomputer.


Third...Above the clutter rejection threshold, there is no such thing as 'clutter'. So this statement: 'the clutter rejection will become so overwhelming that only a veritable supercomputer would be needed.' is categorically wrong. Put another way: above the rejection threshold there are targets or returns, and below the threshold there is clutter. So the better statement should be: 'If the detection threshold is reduced enough to make this at least potentially possible, the amount of radar returns will become so overwhelming that only a veritable supercomputer would be needed.'
My use of the word "clutter" was obviously in reference to radar returns that are processed and found not to be targets, i.e., processed radar returns to be ignored. Why else would I refer to the need for a supercomputer to wade through this "clutter"?


The problem with processing a lot of radar returns in trying to filter out an F-117 class body is indeed processing power.
Correct, that's why a supercomputer could potentially improve LO aircraft detection range.


In other words, if there are 100 clusters of radar returns, ALL of them must be continuously processed over time before the system can declare that one or more is/are valid target(s). So what this means is that at some level of lowering the rejection threshold, we have no choice but to filter on not only clustering and amplitude but just about as many of the known radar return characteristics as possible to try to find an F-117 class body. If there are 1,000 or 10,000 clusters of returns, ALL of them must pass through these filters.
Bottom line is, the more processing power is available, the more easily an LO aircraft could be detected.


Here is what makes the F-22 and F-35 so far unique as threats, and how they would process whatever radar returns they may find: the opposition must be active in trying to find them, and in doing so, the F-22 and F-35 can afford to be 'stingy' with their own active radars because the opposition is already making its presence known. Then, with their superior radar technology, they are able to make the opposition stand out even more against the background, which in turn leads to superior target resolution for the missiles with minimal active radar scanning.

Boom...!!!

No more opposition. Or should we say: No more J-20? :lol:
As I said earlier, the manned fighter could sit back silently as the backup attacker while interlinked UAVs did the target illumination and attack in conjunction with the radar from AWACS and/or onboard the UAV or perhaps a multi-modal radar.
 
Clutter rejection is largely a function of the signal processing filters, of which you yourself mention a few examples.

Correct; that's why the lower threshold would make the processing load overwhelming and require a supercomputer.

My use of the word "clutter" was obviously in reference to radar returns that are processed and found not to be targets, i.e., processed radar returns to be ignored. Why else would I refer to the need for a supercomputer to wade through this "clutter"?
Even if we grant you this latitude, it would be conditional; here is why...

A priori and a posteriori - Wikipedia, the free encyclopedia
The phrases "a priori" and "a posteriori" are Latin for "from what comes before" and "from what comes later"...

1- Say you are told to look for a diamond ring in a playroom. Immediately you are loaded with the relevant a priori knowledge about 'toy', 'diamond', and 'ring'. When you enter the playroom, your vision is filled with toys and you know that you are safe in mentally discarding toys to search for a diamond ring. The patterns for what is a 'toy' and a 'diamond ring' are known to you.

2- Say you are told to look for an object that does not belong in a playroom. Immediately you are loaded with relevant a priori knowledge of what a 'toy' is, but nothing about this object. Your problem is now more difficult by magnitudes. Even though you have the a priori knowledge of what constitutes an object of an entertainment nature, each object slightly differs from its companions in this room. Were you given any a priori knowledge of the DEGREE of difference(s) of this unknown object against the toys? The less a priori knowledge you have of the degree of difference(s), the greater the burden upon you to inspect and catalog the physical characteristics of EACH toy, correlate what are the same, correlate what are similar, discard the correlated, and finally discern the object that supposedly does not belong in a playroom. It may in fact be a diamond ring, but you would not know what it is anyway. All you know is that, based upon a posteriori knowledge gained after all that work, you finally found this <something> that does not belong.

3- Say you are told to find an object that does not belong in a room. Not a 'playroom' but simply a room. As the Americans would say: now you are in a world of sh!t. You have no a priori knowledge of anything in this room. You do not recognize what each item is or what it does. For all you know, the first thing you discarded was the diamond ring, except you did not know what it was or how valuable it was.

Situation 2 gave you an advantage in that, after a certain amount of time and a number of physical structures inspected, if you came across the diamond ring, its physical structure and characteristics would be sufficiently at odds with the a posteriori knowledge you had gathered that you could reasonably be assured you had found the object that does not belong.

In situation 3, the diamond ring may be the first or 10th or 20th object you inspected, memorized, and discarded. Only after a while, when you have finally gathered enough a posteriori knowledge of what a 'toy' is (although you may not call it that), will your memory trigger the realization that the 1st, or 10th, or 20th item you discarded had gross physical structural differences from the 'toys'. Now you have to return to the spot where you discarded the diamond ring and examine it again. Note: the word 'discard' here does not mean thrown away; it simply means not targeted or focused upon.

In situation 1, the clutter rejection threshold is high and can be safely set as high based upon a priori knowledge of BOTH types of objects.

In situation 2, even though you have no a priori knowledge of one type of object, you have plenty of a priori knowledge of the items in the room so while you can categorize them all as 'clutter' you cannot really discard them below any threshold because you still need to examine each to see if you can find something -- anything -- that does not belong. You may find the diamond ring under the 5th or 10th object you overturned.

An F-117 class body belongs in situations 2 and 3 as THE unknown object or a type of object that must be studied before any categorization.

One may argue/ask: How can that be when the F-117 is still made of metal and has visually recognizable structures that cry out 'aircraft'?

The problem, and the counterargument, is that radar detection is based upon feedback, as in real-time target feedback, and the returns are passed through filters that contain KNOWN factors that make up 'aircraft'. To put it another way, plenty of a posteriori knowledge is carried by each radar system in order to form criteria of a priori knowledge that the system can apply against any target in real time.

We have no a posteriori knowledge of low radar observable aircraft and how they generally look under radar bombardment. No current radar system carries that a posteriori knowledge to apply as an a priori criterion in real time against an unknown body buried in what used to be rejected.

When I say 'we' I mean in general principle and do not include US. Am sure you and the more intelligent people here will understand why. :azn:

The Chinese fellas may have mocked what I say about clutter, but that is because they are ignorant of the fact that radar detection is based upon information theory. Not 'highly' or 'quite' but simply IS based upon information theory. Without it, there would be no radar detection at all. There are plenty of retired engineers living comfortably on their houseboats in Florida who spent their entire careers on practically nothing but clutter processing.

Clutter is an enormous challenge in radar detection. In the air, insects and birds are known for 'volumetric' RCS, meaning a flock of birds creates an RCS only if all the birds are sufficiently close to each other that the radar signals bounced off each bird interact with those from its companions, and this constructive interference or amplification creates an RCS. If one bird departs, the radar will not pick up that bird and the flock's RCS decreases a little. The same goes for insects or hydrometeors, a fancy word for rain and snow...

Ruoskanen: Tools and Methods for Radar Performance Evaluation and Enhancement, ISBN 951-22-8144-9
For example, for rain clutter volumetric backscattering...
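
One way to picture this volumetric effect is to treat the flock's return as the coherent sum of many small scatterers with essentially random phases, in which case the average aggregate RCS works out to roughly the sum of the individual ones. A rough sketch (the 0.01 m^2 per-bird figure and everything else here is assumed, not taken from the source above):

Code:
# Coherent-sum sketch of a flock's "volumetric" RCS; the 0.01 m^2 per bird is assumed.
import cmath, random

random.seed(3)
BIRD_RCS_M2 = 0.01

def aggregate_rcs(n_birds):
    """Magnitude-squared of the coherent sum of n scatterers with random phases."""
    field = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi)) * BIRD_RCS_M2 ** 0.5
                for _ in range(n_birds))
    return abs(field) ** 2

def mean_rcs(n_birds, trials=20_000):
    return sum(aggregate_rcs(n_birds) for _ in range(trials)) / trials

print(f"mean flock RCS, 30 birds: {mean_rcs(30):.3f} m^2")  # roughly 0.30 m^2
print(f"mean flock RCS, 29 birds: {mean_rcs(29):.3f} m^2")  # a little lower, roughly 0.29 m^2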

Birds and insects have been known to fool radars as 'angels'...

Detection of Bird Migration by Centimetric Radar-- a Cause of Radar 'Angels'
High-power centimetric radar at times records random scatterings and occasionally dense displays of small blobs of echo, which have been called radar 'angels'.

It is shown that these properties can be satisfactorily explained on the assumption that the echoes are received from birds on migration.

On the ground, vegetation, from fields of grass to lush forests, creates its own volumetric RCS, except that its characteristics are predictable, such as swaying in the wind, and that rhythmic motion can be factored in as a discriminant.

Man-made clutter such as the urban environment is particularly hostile to airborne radars but friendly to 'stealth'.

See this document...

www.ifp.uni-stuttgart.de/publications/phowo07/180Stilla.pdf

[Image: SAR radar imaging of an urban environment, from the document above]


Figures 2-9 illustrate the difference between a photograph of an urban environment and a radar-derived image of it. For the radar, the urban environment is filled with 'flashes', and figure 12 explains why: corner reflectors. That structure has been discussed here before. Any airborne radar looking down at a city would most likely miss a flight of F-35s even if they flew across its radar view.

Correct, that's why a supercomputer could potentially improve LO aircraft detection range.

Bottom line is, the more processing power is available, the more easily an LO aircraft could be detected.

As I said earlier, the manned fighter could sit back silently as the backup attacker while interlinked UAVs did the target illumination and attack in conjunction with the radar from AWACS and/or onboard the UAV or perhaps a multi-modal radar.
There is another problem regarding clutter when dealing with 'stealth' aircraft, and that is the signal-to-clutter ratio (SCR)...

Filters for Detection of Small Radar Signals in Clutter | Browse Journal - Journal of Applied Physics
Radar clutter is distinguished from thermal noise by being caused by random reflection of transmitted electromagnetic energy.

Defining the signal-to-clutter ratio as the ratio of the peak signal to the rms value of the clutter, the optimum linear filter is derived for enhancing this ratio.

IEEE Xplore - CMP Antenna Array GPR and Signal-to-Clutter Ratio Improvement
CMP Antenna Array GPR and Signal-to-Clutter Ratio Improvement

Xuan Feng; Sato, M.; Yan Zhang; Cai Liu; Fusheng Shi; Yonghui Zhao; Coll. of Geoexploration Sci. & Technol., Jilin Univ., Changchun

Ground-penetrating radar (GPR) is recognized as a promising sensor for detecting buried landmines.

The result shows the signal-to-clutter ratio was dramatically improved.
For the Chinese boys here, once again do note the names of my second source lest you accuse me of using 'biased' sources.

Anyway, clutter is distinct from noise in that noise is internally generated while clutter comes from an external response. Clutter varies with range and direction. Noise is constant except for temperature. Clutter + Noise = Total interference. Somewhere in that clutter is a bunch of F-22s and F-35s. If the radar's avionics is crappy enough, its own noise can be greater than the clutter, although that possibility is quite remote today. Noise, if recognized, will be suppressed, but if the clutter level is the same as the noise, then both will be suppressed, so the goal is to have avionics whose internal noise is lower than the external clutter. If that is not possible, then the noise must not be suppressed but factored into the filters, making said filters more complex and increasing the odds of missing a target.
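
Putting those two definitions together in a quick numeric sketch (all sample values are invented): the SCR compares the peak signal against the rms clutter alone, while the signal-to-(clutter+noise) ratio uses the total interference.

Code:
# Quick numeric illustration of the definitions above; all sample values are invented.
import math, random

random.seed(5)
peak_signal = 4.0                                         # assumed peak target echo amplitude
clutter = [random.gauss(0.0, 1.0) for _ in range(1000)]   # external clutter samples
noise_power = 0.25                                        # internal receiver noise power

clutter_power = sum(c * c for c in clutter) / len(clutter)   # square of the rms clutter
scr_db = 10 * math.log10(peak_signal ** 2 / clutter_power)   # 20*log10(peak / rms clutter)
scnr_db = 10 * math.log10(peak_signal ** 2 / (clutter_power + noise_power))

print(f"signal-to-clutter ratio         : {scr_db:.1f} dB")
print(f"signal-to-(clutter+noise) ratio : {scnr_db:.1f} dB")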

So against low radar observable targets, two things are important: a flying supercomputer and prior knowledge of what a 'stealth' target looks like. The US has both.
 
