Discussion:
[Arm-netbook] libre 64-bit risc-v SoC
Luke Kenneth Casson Leighton
2017-04-27 11:21:08 UTC
ok so it would seem that the huge amount of work going into RISC-V
means that it's on track to becoming a steamroller that will squash
proprietary SoCs, so i'm quite happy to make sure that it's
not-so-subtly nudged in the right direction.

i've started a page where i am keeping notes:
http://rhombus-tech.net/riscv/libre_riscv/ and the general goal is to
create a desirable mass-volume low-cost SoC, meaning that it will need
to at least do 1080p60 video decode and have 3D graphics capability.
oh... and be entirely libre.

the plan is:

* to create an absolute basic SoC, starting from lowRISC (64-bit),
ORGFX (3D graphics) and MIAOW (OpenCL engine), in at least 90nm as a
low-cost proof-of-concept where mistakes can be iterated through
* provide the end-result to software developers so that they can have
actual real silicon to work with
* begin a first crowd-funding phase to create a 28nm (or better)
multi-core SMP SoC

for this first phase the interfaces that i've tracked down so far are
almost entirely from opencores.org, meaning that there really should
be absolutely no need to license any costly hard macros. that
*includes* a DDR3 controller (but does not include a DDR3 PHY, which
will need to be designed):

* DDR3 controller (not including PHY)
* lowRISC contains "minion cores" so can be soft-programmed to do any GPIO
* boot and debug through ZipCPU's UART (use an existing EC's on-board FLASH)
* OpenCores VGA controller (actually it's an LCD RGB/TTL controller)
* OpenCores ULPI USB 2.0 controller
* OpenCores USB-OTG 1.1 PHY

note that there are NO ANALOG INTERFACES in that. this is *really*
important to avoid, because mixed analog and digital is incredibly
hard to get right. also note that things like HDMI, SATA, and even
ethernet are quite deliberately NOT on the list. Ethernet RMII (which
is digital) could be implemented in software using a minion core. the
advantage of using the opencores VGA (actually LCD) controller is: i
already have the full source for a *complete* linux driver.

I2C, SPI, SD/MMC, UART, EINT and GPIO - all of these can be
software-programmed as bit-banging in the minion cores.
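(for anyone not familiar with the term: "bit-banging" just means the minion
core toggles GPIO pins entirely in firmware. a rough, untested sketch of a
bit-banged I2C write - the register address and pin numbers here are
completely made up, the real ones would be whatever the lowRISC minion
cores actually expose:)

  #include <stdint.h>

  /* hypothetical memory-mapped GPIO output register on a minion core;
     the real address and bit layout are whatever lowRISC defines */
  #define GPIO_OUT (*(volatile uint32_t *)0x40000000u)
  #define SDA      (1u << 4)   /* made-up pin assignments */
  #define SCL      (1u << 5)

  static void delay_half_bit(void)      /* ~5us gives roughly 100kHz I2C */
  {
      for (volatile int i = 0; i < 100; i++)
          ;                             /* crude busy-wait, good enough here */
  }

  static void i2c_start(void)
  {
      GPIO_OUT |= (SDA | SCL);  delay_half_bit();
      GPIO_OUT &= ~SDA;         delay_half_bit();  /* SDA falls while SCL high */
      GPIO_OUT &= ~SCL;
  }

  static void i2c_write_byte(uint8_t b)
  {
      for (int i = 7; i >= 0; i--) {
          if (b & (1u << i)) GPIO_OUT |= SDA; else GPIO_OUT &= ~SDA;
          delay_half_bit();
          GPIO_OUT |= SCL;  delay_half_bit();      /* clock the bit out */
          GPIO_OUT &= ~SCL;
      }
      /* ACK clock omitted for brevity */
  }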

these interfaces, amazingly, are enough to do an SoC that, if put into
40nm, would easily compete with some of TI's offerings, as well as the
Allwinner R8 (aka A13).

i've also managed to get alliance and coriolis2 compiled on
debian/testing (took a while) so it *might* not be necessary even to
pay for the ASIC design tooling (the cost of which is insane).
coriolis2 includes a reasonable auto-router. i still have yet to go
through the tutorials to see how it works. for design rules: 90nm
design rules (stacks etc.) are actually publicly available, which
would potentially mean that a clock rate of at least 300mhz would be
achievable: interestingly 800mhz DDR3 RAM from 2012 used 90nm
geometry. 65 down to 40nm would be much more preferable but may be
hard to get.

graphics: i'm going through the list of people who have done GPUs (or
parts of one). MIAOW, Nyuzi, ORGFX. the gplgpu isn't gpl. it's been
modified to "the text of the GPL license plus an additional clause
which is that if you want to use this for commercial purposes then...
you can't". which is *NOT* a GPL license, it's a proprietary
commercial license!

MIAOW is just an OpenCL engine but a stonking good one that's
compatible with AMD's software. nyuzi is an experimental GPU where i
hope its developer believes in its potential. ORGFX i am currently
evaluating but it looks pretty damn good, and i think it is slightly
underestimated. i could really use some help evaluating it properly.
my feeling is that a combination of MIAOW to handle shading and ORGFX
for the rendering would be a really powerful combination.

so.

it's basically doable. comments and ideas welcome, please do edit the
page to keep track of notes http://rhombus-tech.net/riscv/libre_riscv/

---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68

Allan Mwenda
2017-04-28 07:36:27 UTC
Yes. Do it. DO IT.
Pen-Yuan Hsing
2017-05-03 01:05:13 UTC
I'm really really excited about a possible 100% libre RISC-V based computer.
Though I'm a backer of the most recent campaign (and can't wait to get it! :)), I lack the knowledge/skills to actually help with the technical development of this upcoming RISC card.
Is there anything I/we can do to help get this RISC initiative going?
John Luke Gibson
2017-05-03 01:52:38 UTC
Post by Pen-Yuan Hsing
I'm really really excited about a possible 100% libre RISC-V based computer.
Though I'm a backer of the most recent campaign (and can't wait to get it!
:)), I lack the knowledge/skills to actually help with the technical
development of this upcoming RISC card.
Is there anything I/we can do to help get this RISC initiative going?
Print off some of the RISC-V technical documentation, write the link
"http://lists.phcomp.co.uk/pipermail/arm-netbook/2017-April/013457.html"
on it, and leave the copies in cafes, coffee shops, computer labs, etc.
Only leave one copy per place. Someone will get curious, I'm sure, and the
rest is up to them.

Luke Kenneth Casson Leighton
2017-05-03 05:00:38 UTC
Post by Pen-Yuan Hsing
I'm really really excited about a possible 100% libre RISC-V based computer.
Though I'm a backer of the most recent campaign (and can't wait to get it!
:)), I lack the knowledge/skills to actually help with the technical
development of this upcoming RISC card.
Is there anything I/we can do to help get this RISC initiative going?
just spread the word (particularly when the new campaign comes up) -
also, if you know of any business people willing to invest, or meet any
investors, particularly those with an ethical or social focus, do put
them in touch.

also: universities. if you happen to know any university professors
ask their Electrical Engineering Dept to consider research into
RISC-V. it'll sort-of happen anyway (happening already) because it's
easy to get hold of the RISC-V design.

l.

Pen-Yuan Hsing
2017-05-03 15:24:30 UTC
Thanks Luke. This just made me realise that I should bring this up the
next time I run into the local maker space. I only occasionally see
them, but will be sure to remember this!

Is the webpage designed for an interested maker/hacker or computer
science academic to easily understand?

Luke Kenneth Casson Leighton
2017-05-03 15:30:42 UTC
Thanks Luke. This just made me realise that I should bring this up the next
time I run into the local maker space. I only occasionally see them, but
will be sure to remember this!
Is the webpage designed for an interested maker/hacker or computer science
academic to easily understand?
which one? the one i'm maintaining on rhombus-tech is purely for
taking notes, so i don't lose track of the contacts / links that i
find.

also it depends on what you'd like to help with. if you'd like to
help with *this* project's efforts to create a riscv-64 SoC, then this
list and the rhombus tech wiki's a good starting point

if however you'd like to simply make people aware of risc-v in
general then the riscv.org web site's the best place to refer them to.

l.

zap
2017-05-03 22:41:35 UTC
Post by Luke Kenneth Casson Leighton
which one? the one i'm maintaining on rhombus-tech is purely for
taking notes, so i don't lose track of the contacts / links that i
find.
also it depends on what you'd like to help with. if you'd like to
help with *this* project's efforts to create a riscv-64 SoC, then this
list and the rhombus tech wiki's a good starting point
if however you'd like to simply make people aware of risc-v in
general then the riscv.org web site's the best place to refer them to.
isn't lowrisc a better starting point though?
Pen-Yuan Hsing
2017-05-04 10:31:02 UTC
Post by Luke Kenneth Casson Leighton
which one? the one i'm maintaining on rhombus-tech is purely for
taking notes, so i don't lose track of the contacts / links that i
find.
also it depends on what you'd like to help with. if you'd like to
help with *this* project's efforts to create a riscv-64 SoC, then this
list and the rhombus tech wiki's a good starting point
if however you'd like to simply make people aware of risc-v in
general then the riscv.org web site's the best place to refer them to.
l.
Sorry I wasn't clear. I was just wondering if the rhombus-tech page can
be a "landing page" that I can forward people to. But if you think
riscv.org is fine I can do that, too! That said, riscv.org probably
doesn't emphasise the libre nature of it, does it? Therefore would it be
helpful to have some sort of accessible introductory page that talks
about how RISC-V is "fun" for hacking AND its importance in 100% libre
computing?

As for me, I have no technical skills to actually help with development.
That's why I originally asked if there are other ways to help!

Luke Kenneth Casson Leighton
2017-05-04 10:51:00 UTC
Sorry I wasn't clear. I was just wondering if the rhombus-tech page can be a
"landing page" that I can forward people to. But if you think riscv.org is
fine I can do that, too! That said, riscv.org probably doesn't emphasise the
libre nature of it, does it?
permissive licenses being what they are.... no, true, it doesn't.
libre has a very specific meaning, and the goal is to create a *libre*
SoC.

ha, bit of irony for you: gaisler research released the LEON3 SPARCv8
core a number of years ago under the GPLv2, so that people could use
it for "academic and research purposes", the expectation being that
for "commercial" use, they would seek a license from gaisler because
you can't mix GPLv2 source with proprietary hard macro source.

the irony / beauty is: by seeking out *specifically* hard macros even
for DDR3 that are compatible with the GPL, no proprietary license is
needed :)

so... the source code which implements SMP cache coherency for a
multi-core LEON3... i can pull that out and use it :)

l.

m***@gmail.com
2017-05-04 14:29:17 UTC
Post by Luke Kenneth Casson Leighton
ha, bit of irony for you: gaisler research released the LEON3 SPARCv8
core a number of years ago under the GPLv2, so that people could use
it for "academic and research purposes", the expectation being that
for "commercial" use, they would seek a license from gaisler because
you can't mix GPLv2 source with proprietary hard macro source.
the irony / beauty is: by seeking out *specifically* hard macros even
for DDR3 that are compatible with the GPL, no proprietary license is
needed :)
so... the source code which implements SMP cache coherency for a
multi-core LEON3... i can pull that out and use it :)
Uhhm. So you're going to use the GPL'ed macro for "SMP cache coherency" from
the LEON3 SPARCv8 design in the RISC-V, so you can build a multi-core
(SMP) RISC-V?

Or are you considering a new SoC SPARC design?

According to wikipedia there is also a LEON4. If it is based on the LEON3
then the source code should be available, right? Or is a materialized HW
design not obligated to ship with the source, since it's not binary/machine
code?
Luke Kenneth Casson Leighton
2017-05-04 14:33:32 UTC
Post by m***@gmail.com
Uhhm. So you're going to use the GPL'ed macro for "SMP cache coherency" from
the LEON3 SPARCv8 design in the RISC-V, so you can build a multi-core (SMP)
RISC-V?
yyep!
Post by m***@gmail.com
Or are you considering a new SoC SPARC design?
no. not enough mind-share
Post by m***@gmail.com
According to wikipedia there is also a LEON4. If it is based on the LEON3
then the source code should be available right?
it's not.

l.

m***@gmail.com
2017-04-28 10:29:33 UTC
Post by Luke Kenneth Casson Leighton
ok so it would seem that the huge amount of work going into RISC-V
means that it's on track to becoming a steamroller that will squash
proprietary SoCs, so i'm quite happy to make sure that it's
not-so-subtly nudged in the right direction.
http://rhombus-tech.net/riscv/libre_riscv/ and the general goal is to
create a desirable mass-volume low-cost SoC, meaning that it will need
to at least do 1080p60 video decode and have 3D graphics capability.
oh... and be entirely libre.
That's one hornet's nest you're going into. But I'd really like to see you
pull it off.
Post by Luke Kenneth Casson Leighton
* to create an absolute basic SoC, starting from lowRISC (64-bit),
ORGFX (3D graphics) and MIAOW (OpenCL engine), in at least 90nm as a
low-cost proof-of-concept where mistakes can be iterated through
* provide the end-result to software developers so that they can have
actual real silicon to work with
* begin a first crowd-funding phase to create a 28nm (or better)
multi-core SMP SoC
for this first phase the interfaces that i've tracked down so far are
almost entirely from opencores.org, meaning that there really should
be absolutely no need to license any costly hard macros. that
*includes* a DDR3 controller (but does not include a DDR3 PHY, which
* DDR3 controller (not including PHY)
* lowRISC contains "minion cores" so can be soft-programmed to do any GPIO
* boot and debug through ZipCPU's UART (use an existing EC's on-board FLASH)
* OpenCores VGA controller (actually it's an LCD RGB/TTL controller)
* OpenCores ULPI USB 2.0 controller
* OpenCores USB-OTG 1.1 PHY
I'm not much into HW design. But I think it would be wise to aim for USB-C
connectivity.

USB-C does not imply USB 3.0, AFAICT.

USB-C has the option of channeling USB2/3, HDMI and DP via the alternate
modes, plus power. So a stack of USB-C connectors on the user-facing side
would be awesome.

It would also limit the need for other connectors and PHY's.

The problem is MUXing all modes to a single output. New Apple laptops have
USB-C but not all ports support all functions.

Perhaps a bit of FPGA could be the key?

Ethernet over USB-C is still being discussed. So the FPGA might be handy to
have when/if that mode materializes.

A bit of FPGA would be nice to have anyway. Media codecs keep on changing,
so it would extend the life of the SoC.
Post by Luke Kenneth Casson Leighton
note that there are NO ANALOG INTERFACES in that. this is *really*
important to avoid, because mixed analog and digital is incredibly
hard to get right. also note that things like HDMI, SATA, and even
ethernet are quite deliberately NOT on the list.
That's what PHYs are for, right?

VGA is on the decline; I wouldn't bother with it too much. But that's personal.
Post by Luke Kenneth Casson Leighton
Ethernet RMII (which
is digital) could be implemented in software using a minion core. the
advantage of using the opencores VGA (actually LCD) controller is: i
already have the full source for a *complete* linux driver.
I2C, SPI, SD/MMC, UART, EINT and GPIO - all of these can be
software-programmed as bit-banging in the minion cores.
these interfaces, amazingly, are enough to do an SoC that, if put into
40nm, would easily compete with some of TI's offerings, as well as the
Allwinner R8 (aka A13).
i've also managed to get alliance and coriolis2 compiled on
debian/testing (took a while) so it *might* not be necessary even to
pay for the ASIC design tooling (the cost of which is insane).
coriolis2 includes a reasonable auto-router. i still have yet to go
through the tutorials to see how it works. for design rules: 90nm
design rules (stacks etc.) are actually publicly available, which
would potentially mean that a clock rate of at least 300mhz would be
achievable: interestingly 800mhz DDR3 RAM from 2012 used 90nm
geometry. 65 down to 40nm would be much more preferable but may be
hard to get.
I don't think speed is too much of an issue right now. Having something
workable like this, even if only suitable for embedded use, would gain
traction fast enough to get attention and help for new revisions with
smaller and faster process nodes.

Besides, the limit of silicon scaling is nearing. EUV is still not generally
available.

Better architectures are needed, just like better programming.
Luke Kenneth Casson Leighton
2017-04-28 11:23:45 UTC
Post by m***@gmail.com
That's one hornet's nest you're going into.
yyyup. am tracking down the pieces.
Post by m***@gmail.com
But I'd really like to see you pull it off.
like a quantum electron it'll probably happen because i forget to
look backwards :)
Post by m***@gmail.com
I'm not much into HW design. But I think it would be wise to aim for USB-C
connectivity.
not for the first proof-of-concept SoC... unless the external PHY
which will be wired to the ULPI (PHY) interface happens to support
USB-C.

the *mass-volume* SoC: yes, great idea.
Post by m***@gmail.com
USB-C has to option of channeling USB2/3,HDMI,DP via the alternate modes and
power. So a stack of USB-C connectors on the User Facing Side would be
awesome.
remember that 90nm gives a maximum clock rate, if you're really really
lucky, of 400mhz: 300mhz is more realistic. at 65nm you get maybe 700mhz
absolute max.
Post by m***@gmail.com
It would also limit the need for other connectors and PHY's.
that would be a big advantage.
Post by m***@gmail.com
The problem is MUXing all modes to a single output. New Apple laptops have
USB-C but not all ports support all functions.
Perhaps a bit of FPGA could be the key?
yeah.
Post by m***@gmail.com
Ethernet over USB-C is still being discussed. So the FPGA might be handy to
have when/if that mode materializes.
A bit of FPGA would be nice to have anyway. Media codecs keep on changing,
so it would extend the life of the SoC.
at the expense of power consumption.
Post by m***@gmail.com
Post by Luke Kenneth Casson Leighton
note that there are NO ANALOG INTERFACES in that. this is *really*
important to avoid, because mixed analog and digital is incredibly
hard to get right. also note that things like HDMI, SATA, and even
ethernet are quite deliberately NOT on the list.
That's what phy's are for right?
it's not quite that simple, but yes :)
Post by m***@gmail.com
VGA is on decline I would bother with it too much. But that's personal.
yep it's out for this SoC.
Post by m***@gmail.com
I don't think speed is too much of an issue right now. Having something
workable like this, even if only suitable for embedded use, would gain
traction fast enough to get attention and help for new revisions with
smaller and faster process nodes.
yeahyeah. well, the embedded market is where the RV32* is already
being targeted (sifive, pulpino) - there's nobody however who's
started on RV64 because it's a whole different beast. 64-bit is
usually deployed where performance is a priority (space-saving, being
diametrically the opposite, by definition isn't) and that means
DDR3 external RAM instead of e.g. 48k of *internal* SRAM... and many
other things.
Post by m***@gmail.com
Besides, the limit of silicon scaling is nearing. EUV is still not generally
available.
Better architectures are needed, just like better programming.
40% better performance-per-watt is a good enough indicator to me.

l.

m***@gmail.com
2017-04-28 11:47:31 UTC
Post by Luke Kenneth Casson Leighton
Post by m***@gmail.com
The problem is MUXing all modes to a single output. New Apple laptops have
USB-C but not all ports support all functions.
Perhaps a bit of FPGA could be the key?
yeah.
Post by m***@gmail.com
Ethernet over USB-C is still being discussed. So the FPGA might be handy to
have when/if that mode materializes.
A bit of FPGA would be nice to have anyway. Media codecs keep on changing,
so it would extend the life of the SoC.
at the expense of power consumption.
If you're trying to transcode something that you don't have a
co-processor/module for, you're forced to CPU/GPU transcoding. Would an
FPGA still be more power-hungry then?

I think/hope FPGAs are more efficient for specific tasks than CPUs/GPUs.

We can always have evolution create an efficient decoder ;-)
https://www.damninteresting.com/on-the-origin-of-circuits/
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.9691&rep=rep1&type=pdf
Luke Kenneth Casson Leighton
2017-04-28 12:58:57 UTC
Post by m***@gmail.com
If you're trying to transcode something that you don't have a
co-processor/module for, you're forced to CPU/GPU transcoding.
you may be misunderstanding: the usual way to interact with a GPU is
to use a memory buffer, drop some data in it, tell the GPU (again via
a memory location) "here, get on with it" - basically a
hardware-version of an API - and it goes and executes its *OWN*
instructions, completely independently and absolutely nothing to do
with the CPU.

there's no "transcoding" involved because they share the same memory bus.
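(to illustrate that "hardware API" idea in code: the doorbell address and
the command format below are entirely hypothetical, it's just the general
shape of shared-memory submission:)

  #include <stdint.h>
  #include <string.h>

  /* entirely made-up layout, purely to show shared-memory + doorbell */
  #define GPU_DOORBELL (*(volatile uint32_t *)0x50000000u)

  static uint32_t cmd_buf[256];   /* buffer the GPU can also see */

  void gpu_submit(const uint32_t *cmds, unsigned n)
  {
      memcpy(cmd_buf, cmds, n * sizeof(uint32_t)); /* drop the data in memory */
      GPU_DOORBELL = n;                            /* "here, get on with it"  */
      /* from this point the GPU fetches and executes its *own* instructions
         from cmd_buf, completely independently of the CPU */
  }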
Post by m***@gmail.com
Would an FPGA
still be more power-hungry then?
yes.
Post by m***@gmail.com
I think/hope FPGAs are more efficient for specific tasks than CPUs/GPUs.
you wouldn't give a general-purpose task to an FPGA, and you wouldn't
give a specialist task for which they're not suited to a CPU, GPU _or_
an FPGA: you'd give it to a custom piece of silicon.

in the case where you have something that falls outside of the custom
silicon (a newer CODEC for example) then yes, an FPGA would *possibly*
help... if and only if you have enough bandwidth.

video is RIDICULOUSLY bandwidth-hungry. 1920x1080 @ 60fps 32bpp
is... an insane data-rate. it's 470 MEGABYTES per second. that's
what the framebuffer has to handle, so you not only have to have the
HDMI (or other video) PHY capable of handling that but the CODEC
hardware has to be able to *write* - simultaneously - on the exact
same memory bus.
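
(quick sanity-check of that figure, for anyone who wants to run the numbers:)

  #include <stdio.h>

  int main(void)
  {
      /* 1920x1080, 32bpp (4 bytes/pixel), 60 frames per second */
      double bytes_per_frame = 1920.0 * 1080.0 * 4.0;   /* ~8.3 million bytes */
      double bytes_per_sec   = bytes_per_frame * 60.0;
      printf("%.0f MB/s just to scan the framebuffer out\n",
             bytes_per_sec / (1024.0 * 1024.0));        /* ~475 MB/s */
      /* ...and the CODEC has to WRITE roughly the same amount into the
         same RAM at the same time, so the memory bus sees about double */
      return 0;
  }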

the point is: if you're considering using an FPGA to accelerate video
it's gonna be a *really* big and expensive FPGA, and you would need to
implement something like PCIe just to cope with the communications
between the two.

costs just escalated way beyond market value.

this is why companies just simply... abandon one SoC and do another
one which has an improved custom CODEC silicon which *does* handle the
newer CODEC(s).
Post by m***@gmail.com
We can always have evolution create an efficient decoder ;-)
https://www.damninteresting.com/on-the-origin-of-circuits/
woooow.

"It seems that evolution had not merely selected the best code for the
task, it had also advocated those programs which took advantage of the
electromagnetic quirks of that specific microchip environment. The
five separate logic cells were clearly crucial to the chip’s
operation, but they were interacting with the main circuitry through
some unorthodox method— most likely via the subtle magnetic fields
that are created when electrons flow through circuitry, an effect
known as magnetic flux. There was also evidence that the circuit was
not relying solely on the transistors’ absolute ON and OFF positions
like a typical chip; it was capitalizing upon analogue shades of gray
along with the digital black and white."

that's incredible.

l.

Bill Kontos
2017-04-28 14:05:37 UTC
This is the most interesting article I've read in a long time. Like machine
learning, but on an FPGA... and analog!!! It goes to prove my hunch that the
binary approach to computing is not the most optimal one. Analog might be
hard, but with enough investment it can give better results in the long run.
It's just that it's hard enough to be considered impossible right now.
Luke Kenneth Casson Leighton
2017-04-28 14:17:08 UTC
Post by Bill Kontos
This is the most interesting article I've read in a long time. Like machine
learning but on an fpga... and analog!!!
more than that: analog on a *digital* chip! and using E.M. effects
(cross-talk) to make up the circuit! i'm just amazed... but perhaps
it should not be unexpected.

the rest of the article makes a really good point, which has me
deeply concerned now that there are fuckwits out there making
"driverless" cars, toying with people's lives in the process. you
have *no idea* what unexpected decisions are being made, what has been
"optimised out".

with aircraft it's a different matter: the skies are clear, it's a
matter of physics and engineering, and the job of taking off, landing
and changing direction is, if extremely complex, actually just a
matter of programming. also, the PILOT IS ULTIMATELY IN CHARGE.

cars - where you could get thrown unexpected completely unanticipated
scenarios involving life-and-death decisions - are a totally different
matter.

the only truly ethical way to create "driverless" cars is to create
an actual *conscious* machine intelligence with which you can have a
conversation, and *TEACH* it - through a rational conversation - what
the actual parameters are for (a) the laws of the road (b) moral
decisions regarding life-and-death situations.

applying genetic algorithms to driving of vehicles is a stupid,
stupid idea because you cannot tell what has been "optimised out" -
just as the guy from this article says.

l.

m***@gmail.com
2017-04-28 14:45:27 UTC
Post by Luke Kenneth Casson Leighton
the rest of the article makes a really good point, which has me
deeply concerned now that there are fuckwits out there making
"driverless" cars, toying with people's lives in the process. you
have *no idea* what unexpected decisions are being made, what has been
"optimised out".
That's no different from regular "human" programming. If you employ AI
programming you can still validate the code like you would that of a normal
human.

Or build a second independent AI for the "four-eyes" principle.
Post by Luke Kenneth Casson Leighton
with aircraft it's a different matter: the skies are clear, it's a
matter of physics and engineering, and the job of taking off, landing
and changing direction is, if extremely complex, actually just a
matter of programming. also, the PILOT IS ULTIMATELY IN CHARGE.
cars - where you could get thrown unexpected completely unanticipated
scenarios involving life-and-death decisions - are a totally different
matter.
the only truly ethical way to create "driverless" cars is to create
an actual *conscious* machine intelligence with which you can have a
conversation, and *TEACH* it - through a rational conversation - what
the actual parameters are for (a) the laws of the road (b) moral
decisions regarding life-and-death situations.
The problem is nuance. If a cyclist crosses your path and a collision can
only be avoided by driving into a group of people waiting to cross after
you have passed them, the choice seems logical: hit the cyclist. Many are
saved by killing/injuring/bumping one.

Humans are notoriously bad at making those decisions themselves. We only
consider the cyclist. That's our focus. The group becomes the secondary
objective.

Many people are killed/injured trying to avoid hitting animals. You try
to avoid the collision, only to find your vehicle becoming uncontrollable or
a new object on your new trajectory, mostly trees.

The real crisis comes from outside control. The car can be hacked and
become weaponized. That works with humans as well but is more difficult and
takes more time. Programming humans takes time.

Or some other Asimov related issue ;-)
Luke Kenneth Casson Leighton
2017-04-28 14:55:23 UTC
Post by m***@gmail.com
Post by Luke Kenneth Casson Leighton
the rest of the article makes a really good point, which has me
deeply concerned now that there are fuckwits out there making
"driverless" cars, toying with people's lives in the process. you
have *no idea* what unexpected decisions are being made, what has been
"optimised out".
That's no different from regular "human" programming.
it's *massively* different. a human will follow their training,
deploy algorithms and have an *understanding* of the code and what it
does.

with monte-carlo-generated iterative algorithms you *literally* have no
idea what they do or how they do it. the only guarantee that you have
is that *for the set of inputs CURRENTLY tested to date* you have
"known behaviour".

but for the cases which you haven't catered for you *literally* have
no way of knowing how the code is going to react.

now this sounds very very similar to the human case: yes you would
expect human-written code to also have to pass test suites.

but the real difference is highlighted with the following question:
when it comes to previously undiscovered bugs, how the heck are you
supposed to "fix" bugs when you have *LITERALLY* no idea how the code
even works?

and that's what it really boils down to:

(a) in unanticipated circumstances you have literally no idea what
the code will do. it could do something incredibly dangerous.

(b) in unanticipated circumstances the chances of *fixing* the bug in
the genetic-derived code are precisely: zero. the only option is to
run the algorithm again but with a new set of criteria, generating an
entirely new algorithm which *again* is in the same (dangerous)
category.

l.

Bill Kontos
2017-04-28 17:21:00 UTC
On self-driving cars atm the driver is required to sit in the driver's
seat, ready to engage the controls. The moment the driver touches the
gas pedal the car is under his control. So the system is designed in such a
way that the driver is actually in control. In the only accident so far in
the history of Tesla the driver was actually sleeping instead of paying
attention. Also the issue of preventing the AI from optimising out some
edge cases can be solved by carefully planning the tests that the neural
network is trained on, which includes hitting the cyclist instead of the
folks at a bus stop or hitting the tree instead of the animal etc. I'm
confident this stuff has already been taken care of, but of course I would
love it if Tesla's code was open source. Although I fail to see how they
could continue making revenue if they open sourced their code (as that is
basically 50% of what they are selling).
Hannes Schnaitter
2017-04-28 17:54:43 UTC
Hi,

I've done some research into this in the last couple of weeks.

On Fri, 28 Apr 2017 20:21:00 +0300
Post by Bill Kontos
On self driving cars atm the driver is required to sit on the driver's
position ready to engage the controls. The moment the driver touches
the gas pedal the car is under his control. So the system is designed
in such a way that the driver is actually in control. In the only
accident so far in the history of Tesla the driver was actually
sleeping instead of paying attention.
This kind of car is what the SAE defines as a level 2 car. Full
autonomy is level 5. See the standard J3016 (costs nothing, but
needs an account). [1]
Post by Bill Kontos
Also the issue of preventing
the AI from optimising out some edge cases can be solved by carefully
planning the tests that the neural network is trained on, which
includes hitting the cyclist instead of the folks at a bus stop or
hitting the tree instead of the animal etc. I'm confident this stuff
has already been taken care of, but of course I would love it if
Tesla's code was open source. Although I fail to see how they could
continue making revenue if they open sourced their code( as that is
basically 50% of what they are selling).
I'm sad to say that it isn't even close to solved. The only two
concrete ideas I found are:
1. Egoism. The driver of the car always wins.
2. Utilitarianism. "The Greater Good". The best outcome for the most
people.

There is also another one which ignores most of the problem.
3. Random. Creating different possible crash scenarios and selecting one
at random.

Even if we settled on utilitarianism as a good choice, we would have to
calculate, for each possible outcome of the accident, the sum over
participants of (probability of harm times the value assigned to the
harmed participant). Our sensors aren't even close to good enough to
calculate good probabilities, and we have no idea which value to assign
to a participant. And the sensors and computing would have to decide how
many outcomes there could be, then calculate those value-sums for each
and take the best outcome.
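
In code, the utilitarian rule boils down to something like the sketch
below. It is purely illustrative: the probabilities and "values" it
consumes are exactly the numbers that nobody knows how to obtain.

  #include <stddef.h>

  struct participant {
      double p_harm;   /* probability this person is harmed */
      double value;    /* "value" assigned to them - the number nobody
                          knows how to set */
  };

  /* value-sum for one possible crash outcome: sum of p_harm * value */
  static double expected_harm(const struct participant *p, size_t n)
  {
      double sum = 0.0;
      for (size_t i = 0; i < n; i++)
          sum += p[i].p_harm * p[i].value;
      return sum;
  }

  /* the utilitarian car enumerates the outcomes it thinks are possible
     and picks the one with the smallest value-sum - assuming the sensors
     could ever produce these numbers, which today they cannot */
  static size_t pick_outcome(const struct participant *outcomes[],
                             const size_t counts[], size_t n_outcomes)
  {
      size_t best = 0;
      for (size_t i = 1; i < n_outcomes; i++)
          if (expected_harm(outcomes[i], counts[i]) <
              expected_harm(outcomes[best], counts[best]))
              best = i;
      return best;
  }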

Then you have to consider that in many countries the programming of
such a targeting algorithm, one that decides who is killed, would count
as planning a murder. And every casualty in an accident would be
murdered by the people that created the algorithm, because the decision
is no longer a reaction in the moment, as it is for a human driver, but
precalculated and planned.

Then there is the problem of who would buy utilitarian cars. [2]

Greetings
Hannes

[1] http://standards.sae.org/j3016_201609/
[2] https://arxiv.org/abs/1510.03346

John Luke Gibson
2017-04-29 04:18:14 UTC
Permalink
Post by Hannes Schnaitter
Then you have to consider that in many countries the programming of
such a targeting algorithm, one that decides who is killed, would count
as planning a murder. And every casualty in an accident would be
murdered by the people that created the algorithm. Because it isn't
reacting anymore as it was for human drivers but precalculated and
planned.
I wonder what type of prosecutor it would take to bring an AI to court?

Luke Kenneth Casson Leighton
2017-04-29 04:56:09 UTC
Permalink
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68
Post by John Luke Gibson
I wonder what type of prosecutor it would take to bring an ai to court?
https://en.wikipedia.org/wiki/The_Hour_of_the_Pig

:)

Allan Mwenda
2017-04-29 11:38:08 UTC
Permalink
Came here for the 64-bit processor, stayed for the sci-fi.
Post by John Luke Gibson
Post by Hannes Schnaitter
Then you have to consider that in many countries the programming of
such a targeting algorithm, one that decides who is killed, would count
as planning a murder. And every casualty in an accident would be
murdered by the people that created the algorithm. Because it isn't
reacting anymore as it was for human drivers but precalculated and
planned.
I wonder what type of prosecutor it would take to bring an ai to court?
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
Luke Kenneth Casson Leighton
2017-04-29 11:44:36 UTC
Permalink
Post by Allan Mwenda
Came here for the 64bit processor stayed for the sci-fi
:)

Luke Kenneth Casson Leighton
2017-04-29 11:44:56 UTC
Permalink
https://hackaday.io/event/21084-open-source-chip-design-hack-chat/log/57313-open-source-chip-design-hack-chat-transcript

Hannes Schnaitter
2017-04-28 14:56:30 UTC
Permalink
Hi,

On the topic of the ethics of driverless cars I'd recommend Patrick Lin's
chapter 'Why ethics matters for autonomous cars' in a mostly German book
about autonomous driving.

<http://link.springer.com/chapter/10.1007/978-3-662-45854-9_4>

Greetings
Hannes
--
Sent with Dekko from my Ubuntu device

Luke Kenneth Casson Leighton
2017-04-28 14:59:59 UTC
Permalink
Post by Hannes Schnaitter
Hi,
On the topic of the ethics of driverless cars I'd recomment Patrick Lin's
chapter 'Why ethics matters for autonomous cars' in a mostly german book
about autonomous driving.
<http://link.springer.com/chapter/10.1007/978-3-662-45854-9_4>
thanks hannes

m***@gmail.com
2017-04-28 14:31:46 UTC
Permalink
Post by Luke Kenneth Casson Leighton
in the case where you have something that falls outside of the custom
silicon (a newer CODEC for example) then yes, an FPGA would *possibly*
help... if and only if you have enough bandwidth.
That is what I was talking about.
Post by Luke Kenneth Casson Leighton
is... an insane data-rate. it's 470 MEGABYTES per second. that's
what the framebuffer has to handle, so you not only have to have the
HDMI (or other video) PHY capable of handling that but the CODEC
hardware has to be able to *write* - simultaneously - on the exact
same memory bus.
I overestimated the capabilities of an FPGA. I've just read that you need
two to four FPGAs linked together to do H.264 in realtime, or a completely
new one. FPGAs are also usually very slow.

I found a nice presentation on using FPGA's for video codecs.
https://www.ece.cmu.edu/~ece796/seminar/10/seminar/FPGA.ppt

The most fascinating option I found is to reconfigure the FPGA for each
processing step.
Post by Luke Kenneth Casson Leighton
the point is: if you're considering using an FPGA to accelerate video
it's gonna be a *really* big and expensive FPGA, and you would need to
implement something like PCIe just to cope with the communications
between the two.
costs just escalated way beyond market value.
this is why companies just simply... abandon one SoC and do another
one which has an improved custom CODEC silicon which *does* handle the
newer CODEC(s).
Hmm. So for longevity the video decoder should be outside the SoC and be
serviceable... Nah just buy a new EOMA card and keep the rest. ;-)

Speaking of which, any plans for an en/decoder module (IP block is the
term, right?) in the new SoC? Or are you leaving that out?
Luke Kenneth Casson Leighton
2017-04-28 14:41:19 UTC
Permalink
Post by m***@gmail.com
Speaking of which any plans for a en/decoder module(IP Block is therm
right?) in the new SoC? Or leaving that out?
opencores has some codecs and VP8 and VP9 are available for production SoCs.

Bill Kontos
2017-04-28 22:56:27 UTC
Permalink
Out of curiosity, has anyone ever attempted to prototype a hardware block
based on evolution principles? Doing it on an FPGA is probably a bad idea
since we won't be able to reproduce the results across more copies, but this
could potentially also happen in a software simulation where the input and
output interfaces of the hardware block are pre-defined.
Post by m***@gmail.com
The problem is MUXing all modes to a single output. New Apple laptops have
USB-C but not all ports support all functions.
Perhaps a bit of FPGA could be the key?
yeah.
Post by m***@gmail.com
Ethernet over USB-C is still being discussed. So the FPGA might be handy to
have when/if that mode is materialized.
A bit of FPGA would be nice to have anyway. Media codecs keep on changing
and would extend the life of the SoC.
at the expense of power consumption.
If you're trying to trans-code something that you don't have a
co-processor/module for you're forced to CPU/GPU trans-coding. Would an FPGA
still be more power hungry then?
I think/hope FPGAs are more efficient for specific tasks than CPU/GPUs.
We can always have evolution create an efficient decoder ;-)
https://www.damninteresting.com/on-the-origin-of-circuits/
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.9691&rep=rep1&type=pdf
ryan
2017-04-29 03:51:49 UTC
Permalink
Post by Bill Kontos
Out of curiosity has anyone ever attempted to prototype a hardware
block based on evolution principles? Doing it on an fpga is probably a
bad idea since we wont be able to implement the results in more copies
but this could potentially also happen in a software simulation where
the input and output interfaces of the hardware block are pre defined
<snip>
I suspect that without the feature of the evolved result being an
instruction set that only works on that one chip, because it exploits that
particular chip's quirks, some efficiency would be lost.

I'm imagining a system where traditional silicon grooms many FPGAs,
each with a dedicated task, and the system is provided with some
known-good instruction sets that work, but only slowly. So then either
the OEM or the user sets up their fancy new system, and one of the steps
is to plug it in and run a setup program for anywhere from a few hours
to a couple of days which iterates the instructions to improve
efficiency, then they can begin to use their system.

As for using this method in a software simulation, I wouldn't be
surprised if some chip manufacturers already do that for certain
sections of the chip, even if it's only during the early design phase. I
would imagine the software guiding the evolution could be instructed to
cull anything that isn't working with binary, thus allowing human
engineers/programmers to more easily reverse engineer the instruction
set and further edit it.
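
As a very rough illustration of the software-simulation version (toy
Python, not a real hardware flow): treat the block as nothing more than a
truth table behind a fixed 4-bit-in / 1-bit-out interface and evolve it
towards a target function.

import random

N_INPUTS = 4
ROWS = 1 << N_INPUTS            # 16 rows in the truth table

def target(bits):
    # the behaviour we want the evolved block to implement: odd parity
    return sum(bits) % 2

def random_genome():
    # genome = the block's entire truth table, one output bit per input row
    return [random.randint(0, 1) for _ in range(ROWS)]

def fitness(genome):
    # score = number of input combinations where the block matches the target
    score = 0
    for row in range(ROWS):
        bits = [(row >> i) & 1 for i in range(N_INPUTS)]
        if genome[row] == target(bits):
            score += 1
    return score

def mutate(genome, rate=0.1):
    return [1 - b if random.random() < rate else b for b in genome]

# a minimal (1+1) evolutionary loop, entirely in simulation
best = random_genome()
for generation in range(5000):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child
    if fitness(best) == ROWS:
        print("matched the target truth table at generation", generation)
        break

In simulation there are no chip-specific quirks to exploit, which is
exactly the efficiency loss mentioned above.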
Hendrik Boom
2017-04-29 13:57:54 UTC
Permalink
Post by ryan
Post by Bill Kontos
Out of curiosity has anyone ever attempted to prototype a hardware block
based on evolution principles? Doing it on an fpga is probably a bad idea
since we wont be able to implement the results in more copies but this
could potentially also happen in a software simulation where the input
and output interfaces of the hardware block are pre defined
<snip>
I suspect that without having the feature of it being an instruction set
that only works on that one chip due to it exploiting the quirks of the
chip, some efficiency would be lost.
I'm imagining a system where traditional silicone grooms many FPGAs, each
with a dedicated task, and the system is provided with some known-good
instruction sets that work, but only slowly. So then either the OEM or the
user sets up their fancy new system, and one of the steps is to plug it in
and run a setup program for anywhere from a few hours to a couple of days
which iterates the instructions to improve efficiency, then they can begin
to use their system.
As for using this method in a software simulation, I wouldn't be surprised
if some chip manufacturers already do that for certain sections of the
chip, even if its only during the early design faze. I would imagine the
software guiding the evolution could be instructed to cull anything that
isn't working with binary, thus allowing human engineers/programmers to
more easily reverse engineer the instruction set and further edit it.
Let me remind you of a real-world situation. The hardware designers
were working on the second version of their successful CPU. They
attached some counters to measure how many times the various
instructions were being executed. They discovered that the most
common instructions were certain test and branch instructions. So they
worked hard on making sure the next model had the most efficient
implementation of those test and branch instructions they could
achieve.

But when they finally put the new machine together and tried it out,
they found no improvement at all.

Investigating, they discovered they had optimized the wait loop.

-- hendrik

Luke Kenneth Casson Leighton
2017-04-29 17:37:00 UTC
Permalink
Post by Hendrik Boom
Let me remind you of a real-world situation. The hardware designers
were working on the second version of their successful CPU. They
attached some counters to measure how many times the various
instructions were being executed. They discovered that the most
common instructions were certain test and branch instructions. So they
worked hard on making sure the next model had the most efficient
implementation of those test and branch instructions they could
achieve.
But when they finally put the new machine together and tried it out,
they found no improvement at all.
Investigating, they discovered they had optimized the wait loop.
that is frickin' funny. but also relevant, as i am aware of for
example the ICT's efforts to add x86-accelerating instructions to the
Loongson 2G architecture. although it is a MIPS64, they added
hardware emulation of the "top" 200 x86 instructions to achieve a qemu
emulation that ran at 70% of actual x86 clock-rates.

which got me thinking: how the heck would you gauge which actual
instructions were "top"? would it be better instead to measure
_power_ consumption per instruction, aiming for better
performance/watt?
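
rough sketch of what i mean, with completely made-up numbers: rank
instructions by raw execution count versus by estimated energy
(count times joules-per-op), and see the ordering change.

# invented profile: (instruction, executions in trace, energy per op in nJ)
profile = [
    ("beq",  9_000_000,  0.5),   # cheap branch, executed constantly
    ("add",  7_000_000,  0.4),
    ("load", 5_000_000,  3.0),   # memory traffic costs real energy
    ("mul",    800_000,  2.0),
    ("div",    150_000, 15.0),   # rare but expensive
]

by_count  = sorted(profile, key=lambda p: p[1], reverse=True)
by_energy = sorted(profile, key=lambda p: p[1] * p[2], reverse=True)

print("top by count :", [name for name, _, _ in by_count])
print("top by energy:", [name for name, _, _ in by_energy])

with those (invented) numbers the "top" instruction by count is the
branch, but by energy it's the load - which would point the optimisation
effort somewhere quite different.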

l.

m***@gmail.com
2017-04-28 10:35:19 UTC
Permalink
Post by Luke Kenneth Casson Leighton
ok so it would seem that the huge amount of work going into RISC-V
means that it's on track to becoming a steamroller that will squash
proprietary SoCs, so i'm quite happy to make sure that it's
not-so-subtly nudged in the right direction.
http://rhombus-tech.net/riscv/libre_riscv/ and the general goal is to
create a desirable mass-volume low-cost SoC, meaning that it will need
to at least do 1080p60 video decode and have 3D graphics capability.
oh... and be entirely libre.
* to create an absolute basic SoC, starting from lowRISC (64-bit),
ORGFX (3D graphics) and MIAOW (OpenCL engine), in at least 90nm as a
low-cost proof-of-concept where mistakes can be iterated through
* provide the end-result to software developers so that they can have
actual real silicon to work with
* begin a first crowd-funding phase to create a 28nm (or better)
multi-core SMP SoC
for this first phase the interfaces that i've tracked down so far are
almost entirely from opencores.org, meaning that there really should
be absolutely no need to license any costly hard macros. that
*includes* a DDR3 controller (but does not include a DDR3 PHY, which
* DDR3 controller (not including PHY)
* lowRISC contains "minion cores" so can be soft-programmed to do any GPIO
* boot and debug through ZipCPU's UART (use an existing EC's on-board FLASH)
Perhaps put it directly onto a USB bridge. UARTs on debugging hardware are
non-existent. We all use FTDI dongles.

Looks like OpenCores has a module. https://opencores.org/project,usb2uart
Post by Luke Kenneth Casson Leighton
* OpenCores VGA controller (actually it's an LCD RGB/TTL controller)
* OpenCores ULPI USB 2.0 controller
* OpenCores USB-OTG 1.1 PHY
note that there are NO ANALOG INTERFACES in that. this is *really*
important to avoid, because mixed analog and digital is incredibly
hard to get right. also note that things like HDMI, SATA, and even
ethernet are quite deliberately NOT on the list. Ethernet RMII (which
is digital) could be implemented in software using a minion core. the
advantage of using the opencores VGA (actually LCD) controller is: i
already have the full source for a *complete* linux driver.
I2C, SPI, SD/MMC, UART, EINT and GPIO - all of these can be
software-programmed as bit-banging in the minion cores.
these interfaces, amazingly, are enough to do an SoC that, if put into
40nm, would easily compete with some of TI's offerings, as well as the
Allwinner R8 (aka A13).
i've also managed to get alliance and coriolis2 compiled on
debian/testing (took a while) so it *might* not be necessary even to
pay for the ASIC design tooling (the cost of which is insane).
coriolis2 includes a reasonable auto-router. i still have yet to go
through the tutorials to see how it works. for design rules: 90nm
design rules (stacks etc.) are actually publicly available, which
would potentially mean that a clock rate of at least 300mhz would be
achievable: interestingly 800mhz DDR3 RAM from 2012 used 90nm
geometry. 65 down to 40nm would be much more preferable but may be
hard to get.
graphics: i'm going through the list of people who have done GPUs (or
parts of one). MIAOW, Nyuzi, ORGFX. the gplgpu isn't gpl. it's been
modified to "the text of the GPL license plus an additional clause
which is that if you want to use this for commercial purposes then...
you can't". which is *NOT* a GPL license, it's a proprietary
commercial license!
MIAOW is just an OpenCL engine but a stonking good one that's
compatible with AMD's software. nyuzi is an experimental GPU where i
hope its developer believes in its potential. ORGFX i am currently
evaluating but it looks pretty damn good, and i think it is slightly
underestimated. i could really use some help evaluating it properly.
my feeling is that a combination of MIAOW to handle shading and ORGFX
for the rendering would be a really powerful combination.
so.
it's basically doable. comments and ideas welcome, please do edit the
page to keep track of notes http://rhombus-tech.net/riscv/libre_riscv/
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68
Luke Kenneth Casson Leighton
2017-04-28 11:26:50 UTC
Permalink
---
crowd-funded eco-conscious hardware: https://www.crowdsupply.com/eoma68
Post by m***@gmail.com
Perhaps put it sirectly to an USB bridge.
no. very important to keep it simple. by wiring something like an
STM32 directly to the 2 UART wires the STM32 can do the job of an FTDI
dongle, but it can also be reprogrammed into an openocd interface, as
well as contain the bootloader (and plug that manually directly into
SRAM over the debug interface).

i discussed this all with dan, the developer of zipcpu, it's what he
*already* does.
Post by m***@gmail.com
UART's on debugging hardware is
non existant. We all use FTDI dongles.
Look like OpenCores has a module. https://opencores.org/project,usb2uart
spoke to the developer of zipcpu (dan). he's the one that has the
DDR3 controller on opencores.

l.

d***@mail.com
2017-05-07 20:26:10 UTC
Permalink
I apologize for DOS'ing the list, I can only get online about once a week.

On Fri, 28 Apr 2017 19:54:43 +0200
Post by Hannes Schnaitter
Hi,
I've done some research into this in the last couple of weeks.
Am Fri, 28 Apr 2017 20:21:00 +0300
<snip>
Post by Hannes Schnaitter
I'm sad to say that it isn't even close to solved. The only two
1. Egoism. The driver of the car always wins.
2. Utilitarianism. "The Greater Good". The best outcome for the most
people.
There is also another one which ignores most of the problem.
3. Random. Creating different possible crash cenarios and selecting one
at random.
<snip>
Post by Hannes Schnaitter
Then there is the problem of who would by utilitarian cars. [2]
Greetings
Hannes
I would.
It's what I'd do in real life.
I've had similar choices before. I have no regrets about placing my own
life on the line either.


Sincerely,
David

David Niklas
2017-05-08 03:00:44 UTC
Permalink
I apologize for DOS'ing the list, I can only get online about once a week.

On Fri, 28 Apr 2017 12:35:19 +0200
2017-04-27 13:21 GMT+02:00 Luke Kenneth Casson Leighton
Post by Luke Kenneth Casson Leighton
ok so it would seem that the huge amount of work going into RISC-V
means that it's on track to becoming a steamroller that will squash
proprietary SoCs, so i'm quite happy to make sure that it's
not-so-subtly nudged in the right direction.
http://rhombus-tech.net/riscv/libre_riscv/ and the general goal is to
create a desirable mass-volume low-cost SoC, meaning that it will need
to at least do 1080p60 video decode and have 3D graphics capability.
oh... and be entirely libre.
* to create an absolute basic SoC, starting from lowRISC (64-bit),
ORGFX (3D graphics) and MIAOW (OpenCL engine), in at least 90nm as a
low-cost proof-of-concept where mistakes can be iterated through
* provide the end-result to software developers so that they can have
actual real silicon to work with
* begin a first crowd-funding phase to create a 28nm (or better)
multi-core SMP SoC
Links?
Post by Luke Kenneth Casson Leighton
for this first phase the interfaces that i've tracked down so far are
almost entirely from opencores.org, meaning that there really should
be absolutely no need to license any costly hard macros. that
*includes* a DDR3 controller (but does not include a DDR3 PHY, which
* DDR3 controller (not including PHY)
* lowRISC contains "minion cores" so can be soft-programmed to do any
GPIO
* boot and debug through ZipCPU's UART (use an existing EC's on-board
FLASH)
Perhaps put it sirectly to an USB bridge. UART's on debugging hardware
is non existant. We all use FTDI dongles.
Look like OpenCores has a module. https://opencores.org/project,usb2uart
Post by Luke Kenneth Casson Leighton
* OpenCores VGA controller (actually it's an LCD RGB/TTL controller)
* OpenCores ULPI USB 2.0 controller
* OpenCores USB-OTG 1.1 PHY
note that there are NO ANALOG INTERFACES in that. this is *really*
important to avoid, because mixed analog and digital is incredibly
hard to get right. also note that things like HDMI, SATA, and even
ethernet are quite deliberately NOT on the list. Ethernet RMII (which
is digital) could be implemented in software using a minion core. the
advantage of using the opencores VGA (actually LCD) controller is: i
already have the full source for a *complete* linux driver.
Considering that analog was around *long* before digital, I'm surprised
that it is "hard to get right". Is there a reason for this?
Isn't there a chip for just this kind of thing?
Post by Luke Kenneth Casson Leighton
I2C, SPI, SD/MMC, UART, EINT and GPIO - all of these can be
software-programmed as bit-banging in the minion cores.
these interfaces, amazingly, are enough to do an SoC that, if put into
40nm, would easily compete with some of TI's offerings, as well as the
Allwinner R8 (aka A13).
i've also managed to get alliance and coriolis2 compiled on
debian/testing (took a while) so it *might* not be necessary even to
Hmm, I can't seem to google that piece of SW. Do you have a link?
Post by Luke Kenneth Casson Leighton
pay for the ASIC design tooling (the cost of which is insane).
coriolis2 includes a reasonable auto-router. i still have yet to go
through the tutorials to see how it works. for design rules: 90nm
design rules (stacks etc.) are actually publicly available, which
would potentially mean that a clock rate of at least 300mhz would be
achievable: interestingly 800mhz DDR3 RAM from 2012 used 90nm
geometry. 65 down to 40nm would be much more preferable but may be
hard to get.
How would you get it in the first place?
Is there a company dedicated to larger than industry standard (nm)
silicon production for small businesses, or are you planning to buy a ...
what would it be called? ... printed wafer producer?
Post by Luke Kenneth Casson Leighton
graphics: i'm going through the list of people who have done GPUs (or
parts of one). MIAOW, Nyuzi, ORGFX. the gplgpu isn't gpl. it's been
modified to "the text of the GPL license plus an additional clause
which is that if you want to use this for commercial purposes then...
you can't". which is *NOT* a GPL license, it's a proprietary
commercial license!
MIAOW is just an OpenCL engine but a stonking good one that's
compatible with AMD's software. nyuzi is an experimental GPU where i
hope its developer believes in its potential. ORGFX i am currently
evaluating but it looks pretty damn good, and i think it is slightly
underestimated. i could really use some help evaluating it properly.
my feeling is that a combination of MIAOW to handle shading and ORGFX
for the rendering would be a really powerful combination.
<snip>

What about Vulkan?
I'm asking, because it is multithreaded, as opposed to OpenGL. I've also
heard, though perhaps the person was wrong, that it is supposed to
replace OpenCL.


Luke Kenneth Casson Leighton
2017-05-08 05:24:08 UTC
Permalink
Post by d***@mail.com
I apologize for DOS'ing the list, I can only get online about once a week.
not a problem david
Post by d***@mail.com
Links?
http://rhombus-tech.net/riscv/libre_riscv/
Post by d***@mail.com
Post by Luke Kenneth Casson Leighton
important to avoid, because mixed analog and digital is incredibly
hard to get right. also note that things like HDMI, SATA, and even
ethernet are quite deliberately NOT on the list. Ethernet RMII (which
is digital) could be implemented in software using a minion core. the
advantage of using the opencores VGA (actually LCD) controller is: i
already have the full source for a *complete* linux driver.
Considering that analog was around *long* before digital I'm surprised
that it is "Hard to get right",
analog isn't "hard". digital isn't "hard". specifically *MIXING*
them is ultra-hard.
Post by d***@mail.com
is there a reason for this?
completely different processes and design criteria. the restrictions
(design rules) placed on digital ASIC layouts have to be adhered to in
the *analog* areas: you can't just change the stack to suit the analog
areas. i don't know the full details, but i know someone with 30
years experience of working with ASICs who does.
Post by d***@mail.com
Isn't there a chip for just this kind of thing?
no. not a custom one... and we're talking custom ASICs here.
Post by d***@mail.com
Post by Luke Kenneth Casson Leighton
I2C, SPI, SD/MMC, UART, EINT and GPIO - all of these can be
software-programmed as bit-banging in the minion cores.
these interfaces, amazingly, are enough to do an SoC that, if put into
40nm, would easily compete with some of TI's offerings, as well as the
Allwinner R8 (aka A13).
i've also managed to get alliance and coriolis2 compiled on
debian/testing (took a while) so it *might* not be necessary even to
Hmm, I can't seem to google that piece of SW. Do you have a link?
https://soc-extras.lip6.fr/en/coriolis/coriolis2-users-guide/
https://soc-extras.lip6.fr/en/alliance-abstract-en/
Post by d***@mail.com
Post by Luke Kenneth Casson Leighton
pay for the ASIC design tooling (the cost of which is insane).
coriolis2 includes a reasonable auto-router. i still have yet to go
through the tutorials to see how it works. for design rules: 90nm
design rules (stacks etc.) are actually publicly available, which
would potentially mean that a clock rate of at least 300mhz would be
achievable: interestingly 800mhz DDR3 RAM from 2012 used 90nm
geometry. 65 down to 40nm would be much more preferable but may be
hard to get.
How would you get it in the first place?
Is there a company dedicated to larger than industry standard (nm)
silicon production for small businesses, or are you planning to buy a ...
what would it be called? ... printed wafer producer?
not confirmed that yet. there are some standards that can be adhered
to which apparently make the choice of foundry irrelevant. still lots
to do here.
Post by d***@mail.com
What about Vulkan?
that sounds like software. underlying hardware would be the same.

l.

d***@mail.com
2017-05-07 20:26:06 UTC
Permalink
I apologize for DOS'ing the list, I can only get online about once a week.

On Fri, 28 Apr 2017 13:58:57 +0100
Post by Luke Kenneth Casson Leighton
Post by m***@gmail.com
If you're trying to trans-code something that you don't have a
co-processor/module for you're forced to CPU/GPU trans-coding.
you may be misunderstanding: the usual way to interact with a GPU is
to use a memory buffer, drop some data in it, tell the GPU (again via
a memory location) "here, get on with it" - basically a
hardware-version of an API - and it goes and executes its *OWN*
instructions, completely independently and absolutely nothing to do
with the CPU.
there's no "transcoding" involved because they share the same memory
bus.
Post by m***@gmail.com
Would a FPGA
still be more power hungry then?
yes.
Post by m***@gmail.com
I think/hope FPGA's are more efficient for specific tasks then
CPU/GPU's
you wouldn't give a general-purpose task to an FPGA, and you wouldn't
give a specialist task for which they're not suited to a CPU, GPU _or_
an FPGA: you'd give it to a custom piece of silicon.
I always thought that FPGA's were good for prototyping or small fast
tasks... But that's just how I learned about them.
Post by Luke Kenneth Casson Leighton
in the case where you have something that falls outside of the custom
silicon (a newer CODEC for example) then yes, an FPGA would *possibly*
help... if and only if you have enough bandwidth.
is... an insane data-rate. it's 470 MEGABYTES per second. that's
what the framebuffer has to handle, so you not only have to have the
HDMI (or other video) PHY capable of handling that but the CODEC
hardware has to be able to *write* - simultaneously - on the exact
same memory bus.
<snip>
Your number seemed off to me so I did the math:
1920*1080*60*4 ==
497,664,000
You're off by almost 30 MiB.
Most video cameras (that I've been able to locate) do 24bpp, 640x480 at
30fps, so that would make the bandwidth requirement 27,648,000 bytes per
second.
Which should be more reasonable for an FPGA (not that I have all the
specs sitting in front of me, mind you).
I am assuming that "encoding video" means encoding from a video camera or
a small youtube video as opposed to encoding to send to a device over,
say, an HDMI cable.
Post by Luke Kenneth Casson Leighton
Post by m***@gmail.com
We can always have evolution create a efficient decoder ;-)
https://www.damninteresting.com/on-the-origin-of-circuits/
<snip>

Sincerely,
David

m***@gmail.com
2017-05-08 07:42:36 UTC
Permalink
Post by d***@mail.com
I apologize for DOS'ing the list, I can only get online about once a week.
On Fri, 28 Apr 2017 13:58:57 +0100
Post by Luke Kenneth Casson Leighton
Post by m***@gmail.com
If you're trying to trans-code something that you don't have a
co-processor/module for you're forced to CPU/GPU trans-coding.
you may be misunderstanding: the usual way to interact with a GPU is
to use a memory buffer, drop some data in it, tell the GPU (again via
a memory location) "here, get on with it" - basically a
hardware-version of an API - and it goes and executes its *OWN*
instructions, completely independently and absolutely nothing to do
with the CPU.
there's no "transcoding" involved because they share the same memory
bus.
Post by m***@gmail.com
Would a FPGA
still be more power hungry then?
yes.
Post by m***@gmail.com
I think/hope FPGA's are more efficient for specific tasks then
CPU/GPU's
you wouldn't give a general-purpose task to an FPGA, and you wouldn't
give a specialist task for which they're not suited to a CPU, GPU _or_
an FPGA: you'd give it to a custom piece of silicon.
I always thought that FPGA's were good for prototyping or small fast
tasks... But that's just how I learned about them.
Don't think of what you were taught. Think of what you can do which has
not been thought of.

The world outside the box is bigger than the one inside the box ;-)
Post by d***@mail.com
Post by Luke Kenneth Casson Leighton
in the case where you have something that falls outside of the custom
silicon (a newer CODEC for example) then yes, an FPGA would *possibly*
help... if and only if you have enough bandwidth.
is... an insane data-rate. it's 470 MEGABYTES per second. that's
what the framebuffer has to handle, so you not only have to have the
HDMI (or other video) PHY capable of handling that but the CODEC
hardware has to be able to *write* - simultaneously - on the exact
same memory bus.
<snip>
1920*1080*60*4 ==
497,664,000
You're off by almost 30 MiB.
497,664,000 ~= 498 MB (Units of 1000)
497,664,000 ~= 475 MiB (Units of 1024)
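
Quick check of that arithmetic in Python, just to show where the two
numbers come from:

bytes_per_second = 1920 * 1080 * 60 * 4      # 32 bpp at 60 fps

print(bytes_per_second)                      # 497664000
print(bytes_per_second / 1000**2, "MB/s")    # ~497.7 (decimal megabytes)
print(bytes_per_second / 1024**2, "MiB/s")   # ~474.6 (binary mebibytes)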
Post by d***@mail.com
Most video cameras (that I've been able to locate), do 24bpp, 640x480 at
30fps, so that would make the bandwidth requirements.
27,648,000
I was specifically hinting at decoding; that's the most-used function. But
these days encoding should also be capable of Full HD.
Post by d***@mail.com
Which should be more reasonable for an FPGA (not that I have all the
specs sitting in front of me, mind you).
I am assuming that "encoding video" means encoding from a video camera or
a small youtube video as opposed to encoding to send to a device over,
say, an HDMI cable.
The problem is that the FPGA has to be very big or very fast. FPGAs are,
apparently, not very fast, thus they need to be big: bandwidth is size
times speed. There's not enough space.
Post by d***@mail.com
Post by Luke Kenneth Casson Leighton
Post by m***@gmail.com
We can always have evolution create a efficient decoder ;-)
https://www.damninteresting.com/on-the-origin-of-circuits/
<snip>
Sincerely,
David