Channel: Altera Forums
Viewing all 19390 articles

Cyclone V SoC / Boot from QSPI issue with Quartus 15.0 and Angstrom distribution

Hi,

I'm trying to boot Linux (the Angstrom distribution) from QSPI flash with the following files generated with Quartus 15.0: preloader, U-Boot, .rbf file, and .dtb file.

With Quartus 13.1, I can boot Linux (the Poky distribution) from QSPI flash. This works on the Arrow SoCKit board and on a custom board too.
To boot from QSPI with Quartus 13.1, I followed the instructions on the rocketboards website (https://rocketboards.org/foswiki/vie...SRD131QspiBoot): how to generate the files, where to store them in QSPI, how to edit the preloader and U-Boot to boot from QSPI, how to compile a minimal rootfs, and so on.

I want to upgrade from Quartus 13.1 to 15.0 because some preloader and U-Boot bugs were fixed (HPS register configuration bugs when LOAN I/Os are used). But I read that if I use Quartus 15.0, I need to use the Angstrom distribution.
I cannot boot from QSPI with Quartus 15.0 and the Angstrom distribution, and no documentation is available on the rocketboards website (it only covers Quartus 13.1).

Do you know where I can find a tutorial on booting from QSPI with Quartus 15.0?

If I use the preloader and U-Boot compiled with Quartus 15.0 together with the .rbf/.dtb files from Quartus 13.0, I can boot the Poky distribution from QSPI, and the LOAN I/O bugs are solved...

Quartus is installed on Windows 10 x64, and zImage and the rootfs are compiled on Ubuntu 12.04 x64.

Thanks for your help

Regards,
Julien

My design worked with Quartus 7 but does not with Quartus 11

Hello everyone,

I have a design that works with Quartus 7, but when I re-synthesize the same design with Quartus 11, it no longer works. For those of you who have faced the same problem: could you tell me where the problem might be? (Timing analysis?)

Regards,

Ny

How can I include fun.c in a Nios project?

hi everyone,
I put some functions in fun.c to make main.c easier to read.
How can I include fun.c in main.c? Where should I put fun.c?
I tried putting it in the BSP folder, but then Eclipse builds the project with many errors:
#include "alt_types.h"
//#include "altera_avalon_pio_regs.h"
#include <io.h>
#include "sys/alt_irq.h"
#include "system.h"
#include <stdio.h>
#include <unistd.h>
#include "timer_fun.c" // this is my own fun
#include "ft245_driver_fun.c" // this is my own fun
#include "biss_driver_fun.c" // this is my own fun

Cyclone V GT, Custom PHY IP Core

Hi,
I am using a Cyclone V GT FPGA with an HSMC SFP card, trying to implement a loopback test through fiber optics. So far I have implemented it successfully, following a tutorial based on a Qsys system. The system uses the Custom PHY IP core and a data pattern generator/checker without 8b/10b encoding/decoding. I have tried to remove the pattern generator and checker and disable the Avalon interface of the Custom PHY, because I want to send my own frame on the Tx side and observe it on the Rx side. Additionally, 8b/10b encoding has to be used, but I can't figure out how to customize the relevant parameters or set up word alignment. Is there a step where I am supposed to describe my transmitted data format (e.g. data, data, control) for alignment purposes? Is there any way to do that? Right now, by exporting the parallel data input and output and using SignalTap, I can only observe a non-deterministic shift (or drift) of the output data relative to the input data, which is probably a synchronization issue.

Thanks
Bill

Mismatch CycloneV (5CSEBA5U19) package with PCB Footprint. How to solve it?

Hi,
In a design I am using the 5CSEBA5U19 in the 484-pin UBGA package (Win Bond -A: 1.90, 0.8 mm pitch).
However, the PCB was made with a footprint for the 484-pin FBGA package (Win Bond -A: 2.00, 1.0 mm pitch).

Does anybody have an idea how to solve this mismatch?
Are there pitch-converter adapter PCBs available?
I haven't found a pin-compatible Cyclone V device yet, although suggestions are welcome.

Thank you very much in advance for your suggestions and help.

Kind regards

GPIO interrupt under UC/OS2-TCPIP

Hello,
I'm working with the DE1-SoC board with a Cyclone V. I'm trying to implement a push-button interrupt under uC/OS-II (which runs on the ARM Cortex-A9 HPS). Does anyone know how to do this?
(I'm using this OS because I need Ethernet capabilities and a real-time OS.)
Thanks!

Newbie Qsys Question

Hello, here's a quick question: does using Qsys imply that one must instantiate a Nios processor? I'm looking at using Rapid Serial IO or PCIe on my FPGA to interface with a TI DSP chip, but I don't want to use a Nios processor on the FPGA.

Thanks,
Joe

Modelsim Simulation VHO vs VHDL files

I noticed there is a difference between the output of an Altera-ModelSim simulation using the .vho file (the netlist from a full Quartus compilation) and one using the VHDL source files directly.
Is there an explanation for this, and which method should I go for?

Thank you

Quartus 9.1 installation problem on W10

Hello,
I am trying to install the Quartus 9.1 Service Pack 2 Web Edition on Windows 10 and am getting this error:


How can I solve it?

Thanks

Modular ADC - MAX10

Hello,

I'm currently trying to make the ADC of a MAX10 board work, but I can't seem to get my interrupt right.

I could configure the modular ADC in sequence mode with Nios correctly, but the interrupt has turned out to be a big hassle in my life.

The code below is my main(), in which I try to initiate the interrupt with the API "altera_modular_adc_init" and then set up "alt_adc_register_callback" (which doesn't make sense to me... why do we need it?). Also, I put an "alt_adc_word_read" afterwards, because when I enter the while loop my code never reaches the ISR, as it was supposed to. The ISR declaration is
Code:

static void alt_ic_isr(void* context, alt_u32 id)
.

Code:

int main(void){
    // Show at the console that the program has started
    printf("*** The ADC has started ***\n");

    alt_u32 *adc_data_ptr;
    alt_u32 line_in_data;

    printf("*** Configure and start the sample_store config ***\n");

    void* edge_capture_ptr = (void*) &edge_capture;

    altera_modular_adc_init(edge_capture_ptr,
                            MODULAR_ADC_SAMPLE_STORE_CSR_IRQ_INTERRUPT_CONTROLLER_ID,
                            MODULAR_ADC_SAMPLE_STORE_CSR_IRQ);
    /* Arguments:
     * - *dev: Pointer to adc device (instance) structure.
     * - callback: Pointer to callback routine to execute at interrupt level
     * - *context: Pointer to adc device structure.
     * - sample_store_base: Base address of the sample store micro core.
     */

    alt_adc_callback adc_callback_ptr;

    alt_modular_adc_dev* devIRQ;
    devIRQ = altera_modular_adc_open(MODULAR_ADC_SAMPLE_STORE_CSR_NAME);
    alt_adc_register_callback(devIRQ,
                              adc_callback_ptr,
                              edge_capture_ptr,
                              MODULAR_ADC_SAMPLE_STORE_CSR_BASE);

    alt_adc_word_read(MODULAR_ADC_SAMPLE_STORE_CSR_BASE, adc_data_ptr,
                      MODULAR_ADC_SAMPLE_STORE_CSR_CSD_LENGTH);

    int adcSampleStorageIRQStatus = 10;
    int adcInterruptAsserted = 10;
    printf("*** All enabled ***\n");
    while (1) {
        adcSampleStorageIRQStatus = READ_ALTERA_MODULAR_ADC_SAMPLE_STORAGE_IRQ_STATUS(MODULAR_ADC_SAMPLE_STORE_CSR_BASE);
        printf("IRQ Status: %d\n", adcSampleStorageIRQStatus);
        adcInterruptAsserted = adc_interrupt_asserted(MODULAR_ADC_SAMPLE_STORE_CSR_BASE);
        printf("Interrupt Asserted?: %d\n", adcInterruptAsserted);
    }
    return 0;
}

I've tried many different things and different drivers, and I just don't get any result. Could someone help me out? Thanks!

clock problem

greetings
hi guys,
I have an assignment which is basically a car-park machine. It has 4 inputs: fifty, twenty, ten, and ticket. I need to detect the input and deduct it from an amount of 1.50. I made the design for the stages, plus the decoder to display them on four 7-segment BCD displays. The sequence starts with "insert ticket", then "remove ticket", then the 1.50 screen that checks the user's input.

Here is my problem: when I load the code onto the Altera DE1 board, the 7-segment display doesn't show the first two stages; it goes straight to the 1.50 stage, because the clock is so fast. The inputs suffer too: say I press the fifty input, the whole process completes with a single press. I am using the 50 MHz clock, so I tried a clock-divider to reduce it to 1 Hz, but the problem is still there. Is there any way to slow the clock so that the first two stages are shown to the user?
---------------------------------------------------------------------------
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity clk200 is
    Port (
        clk_in  : in  STD_LOGIC;
        rst1    : in  STD_LOGIC;
        clk_out : out STD_LOGIC
    );
end clk200;

architecture Behavioral of clk200 is
    signal temporal : STD_LOGIC;
    signal counter  : integer range 0 to 49999999 := 0;
begin
    frequency_divider: process (rst1, clk_in) begin
        if (rst1 = '1') then
            temporal <= '0';
            counter  <= 0;
        elsif rising_edge(clk_in) then
            if (counter = 49999999) then
                temporal <= NOT(temporal);
                counter  <= 0;
            else
                counter <= counter + 1;
            end if;
        end if;
    end process;

    clk_out <= temporal;
end Behavioral;
--------------------------------------------------------------------

What I need is a method that slows the clock down enough for the first two stages to be visible.
Thanks
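For what it's worth, a divided clock is usually the wrong tool here: dividing to 1 Hz only slows the display, and a pressed button still spans millions of clock cycles, so one press is counted many times. A common alternative keeps everything on the 50 MHz clock, turns the counter into a one-cycle enable pulse, and adds an edge detector so each press is counted once. A sketch with illustrative signal names:

```vhdl
-- Sketch: stay on the 50 MHz clock; generate a 1 Hz single-cycle enable
-- (tick) and a one-cycle pulse per new button press (fifty_edge).
-- Signal names here are illustrative, not from the original design.
process (clk_in)
begin
    if rising_edge(clk_in) then
        tick <= '0';
        if counter = 49999999 then
            counter <= 0;
            tick    <= '1';              -- high for exactly one clock per second
        else
            counter <= counter + 1;
        end if;

        fifty_prev <= fifty;                     -- edge detector for "fifty"
        fifty_edge <= fifty and not fifty_prev;  -- one pulse per press
    end if;
end process;

-- In the state machine, advance a stage only when tick = '1' (or on
-- fifty_edge), so "insert ticket" and "remove ticket" each stay visible
-- for a second and a single press is counted exactly once.
```

This avoids creating a derived 1 Hz clock domain entirely, which also keeps timing analysis simple.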

FPGAs for robotics vision camera with 10Ge interface (10 gigabit ethernet)

I need to design a high-performance camera for robotics vision applications, and I'm having massive difficulty figuring out how to select the best combination of FPGA and PHY (~10Gbps transceivers).

I assume the best existing interface is 10Ge (10 gigabit ethernet), but I'm open to better alternatives if they exist.

In a working system, the cables from 4 of these cameras will need to plug into one [or two] interface cards in a high-speed, many-core PC.

At absolute minimum the cables between camera and PC need to be 30 meters, but they should be 90 to 100 meters. In many cases the 4 cameras will be in fixed positions (the four corners of a room at ceiling level), but in many other cases the 4 cameras will be attached to the moving robot (robotic device), presumably with cables hanging down from above. Therefore, an interface and cable system that remains reliable even as the robotic devices and attached cables move is highly desirable.

In making our choices of components and interfaces, the price of the entire system must be minimized (not one specific component). The following is a list of components and interfaces under consideration (other components that are not variable (like image sensor, case, PCB, etc) are not shown):

- FPGA (cyclone, arria, etc)
- 10Ge PHY (manufacturer part number)
- FPGA <==> PHY interface (XGMII, XAUI, etc)
- cable interface (10GBASE-T, various fiber-optics choices)
- cable connectors (RJ45, SFP+, various fiber-optics choices, etc)
- PC interface/client PCB for 4 cameras (presumably PCIe 3.0, # of lanes, consider other options)

Remember, the system requires we connect 4 cameras to the PC. Assuming the 4 camera cables plug into one (or two?) interface/client cards, we must pay close attention to at least three considerations: PC bus bandwidth, PCIe slots, and PCIe lanes. Assuming we only plug 4 cameras into a single PC (rather than 8, which is preferred but perhaps not practical), we have a continuous, sustained flood of 40 Gbps (5 GB/s) of data from the cameras that needs to pass across the PCIe bus. This will require at least 8 lanes of PCIe 3.0, but in practice more likely 16 lanes (the same as top-end GPU cards). In theory this could be distributed across two PCIe 3.0 interface cards with 8 lanes each... assuming motherboard support. I have a feeling far more motherboards will offer two 16-lane PCIe 3.0 slots (to support two GPU cards) than one 16-lane PCIe slot (for a GPU) plus two 8-lane PCIe slots (for other stuff). But that's just a guess.

-----

Now I'll make a few comments based upon my preliminary research. First and foremost, I am MASSIVELY confused about 10Ge cable/connector/interface options. No matter how many times I read articles and documents, I still don't understand half of what I'm reading. Note that I already designed a vision camera with Cyclone3 FPGA and Marvell 88e1111 PHY (1Ge via RJ45), so it isn't quite as if I'm a completely blank slate on the general topic of ethernet!

Almost nobody talks about prices, so I'm not able to narrow down the research process to something finite (like a specific 10Ge cable/connector/interface type). One thing I may have recognized is... it seems like client cards for 10Ge may be significantly cheaper for certain optical formats than 10GBASE-T copper with familiar RJ45 connectors. I have not seen comments about whether 10GBASE-T with CAT7 and RJ45 will be more or less reliable than the various fiber-optics interfaces when the robots and cables are moving around. I have to assume all interfaces are reliable with fixed cables.

I'm also not sure what's the deal with FPGA prices! I looked at the price of an Arria V part on digikey, and the prices ranged from $2,000 to... the price of a brand new Mercedes! WTF is this? The cyclone3 part in my previous camera cost $14 in unit quantity!

I understand that some of the newer and fancier FPGAs have some spiffy stuff inside (like fast transceivers, microcontrollers, and so forth). But... give me a break! Something doesn't make sense! Obviously I'm missing something.

Which brings me to the FPGA <<===>> PHY interface. The reason the cyclone3 part could drive the 1Ge PHY in my previous camera was because the FPGA <<===>> interface was parallel (8-bits data-in DDR, 8-bits data-out DDR, plus a few control bits). 1000Mbps / 8 = 125MHz DDR (which is a bit like 64Mbps SDR). That's an I/O switching rate that inexpensive FPGAs like cyclone3 can handle.

In the case of 10Ge, the nominal interface is called XGMII, which has 32-bit data-out DDR, 32-bit data-in DDR, plus a few control bits. 10000 Mbps / 32 = 312.5 MHz DDR. That's certainly faster than 125 MHz DDR, but I would guess it should be within the ability of FPGAs that are many years newer. True or false? I have to admit, trying to figure out the answer to questions like this from the FPGA specs is like pulling teeth (even more so than the cyclone3 process was).
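The rate arithmetic above can be sanity-checked in a few lines. The per-lane PCIe 3.0 throughput used below (~0.985 GB/s usable after 128b/130b encoding overhead) is our assumed figure, not from the post:

```python
# Sanity-check the 10GbE interface clock/signal rates discussed above.
line_rate_mbps = 10_000                     # 10GbE payload rate

# XGMII: 32-bit data path, clocked DDR
xgmii_ddr_mtps = line_rate_mbps / 32        # 312.5 MT/s per pin (156.25 MHz clock)

# XAUI: 4 lanes with 8b/10b coding (10 line bits carry 8 payload bits)
xaui_lane_gbps = line_rate_mbps / 4 * 10 / 8 / 1000   # 3.125 Gbps per lane

# Four cameras of sustained 10 Gbps each, crossing the PCIe bus
total_gbytes_per_s = 4 * 10 / 8             # 5.0 GB/s
pcie3_lane_gbytes = 0.985                   # assumed usable rate per PCIe 3.0 lane
min_lanes = -(-total_gbytes_per_s // pcie3_lane_gbytes)  # ceiling -> 6, so an x8 slot

print(xgmii_ddr_mtps, xaui_lane_gbps, total_gbytes_per_s, min_lanes)
```

So four cameras fit (with headroom) in a single x8 PCIe 3.0 slot on raw bandwidth alone, which is consistent with the "at least 8 lanes" estimate above.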

However, there may be another problem. Do any quad (or dual/single) 10Ge PHYs have XGMII interfaces? My research finds lots of 10Ge PHYs that seem to have various 4-bit in and 4-bit out interfaces (XFI, XAUI, others), but I'm not sure I've found any that have XGMII interfaces. What's strange is, some seem to claim XGMII support one place, but when I get to the block diagrams, they aren't visible (or aren't clear). For example, if you go to the following webpage (http://www.marvell.com/transceivers/alaska-x-gbe) and look at the 88X3240P and 88X3140 (or 88X3120) the description includes XGMII and Cu (copper). Yet when you click on the "product brief" link next to those product descriptions and look at the block diagrams, well, it isn't clear whether those parts support XGMII or not! Unfortunately, I haven't been able to get hold of the detailed specifications with BGA pinouts, which would probably let me figure this out definitively. Even though I have an NDA with Marvell from years ago (for the 88e1111 part and others), it has been an endless runaround for months on these two chips (not sure why, but maybe because I look like "small fry", given I'm an independent design engineer, not some mega-corporation).

Anyway, the question of XGMII versus XAUI (or one of the other 1-bit or 4-bit interfaces) MAY BE HUGE when it comes to what FPGA I need to drive the PHY. After all, each receiver or transmitter of a 4-bit interface for 10Ge has to support something like 3.125Gbps (presumably over some sort of two-wire differential scheme like LVDS for each bit in each direction). Yes, the data rate is only 2.500Gbps, but with the overhead of 8b/10b and frame start/stop markers (or whatever), the signal rate has to be 3.125Gbps (or close to that, I believe).

This is where many of you in the forum may have experience that makes this question easy! What are the cheapest FPGAs that can handle the (presumably 312.5Mbps DDR) signal rates of the 32-bit XGMII interface, and what are the cheapest FPGAs that can handle the (presumably 3.125Gbps LVDS) signal rates of XAUI or one of the other FPGA <<===>> PHY interfaces?

Incidentally, I can place the FPGA BGA right smack up against the PHY BGA, so PCB trace-length should not be an issue (probably in the range of 5mm to 20mm trace length, with exact equal length for + and - of each signal in the case of LVDS). The PCB will be somewhere between 8-layer and 12-layer, so signal routing between FPGA <<===>> PHY will not be a major issue.

Another price issue. I don't know whether 10Ge PHYs for 10GBASE-T are significantly more or less expensive than for fiber optics interfaces. If they are, let me know.

Other issues that some of you might be able to address:

Price of 30~100 meter long cat6e or cat7 cables with RJ45 connectors (this I work with already).
Price of 30~100 meter long fiber optics cables with [whatever] connectors (here I know nothing).

The difference between the above two is one part of the cost comparison. It is also important to explain the differences on the PCB between these two. In the case of my 1Ge device, the 88e1111 PHY connected right into a cheap "RJ45 with magnetics" connector (with something like 8 resistors and 8 capacitors in this circuit). I don't know whether the 10Ge connectors are any different or more expensive. I have no idea what the fiber-optics connector for mounting on the camera PCB will cost, and what (if any) other electronics or discretes are required on the PCB to support them.

Assuming the fiber optics cables already have connectors on both ends, I guess the cost of those connectors are irrelevant (included in the cable costs). This is the same for cat6e and cat7 cables with their attached RJ45 connectors.

-----

I'm not sure, but the following might be the order to figure out the choices mentioned above.

Look at all potential "cost killers" and identify "which is much cheaper". For example:
#1: 4x 10Ge PCIe 3.0 interface/client cards for copper versus fiber-optics.
#2: 4x 50 meter cat6e/cat7 cables versus 4x 50 meter fiber-optics.
#3: 1x PHY with XGMII versus XAUI/4-bit interface to FPGA.
#4: 1x FPGA with XGMII speed versus XAUI/4-bit speed.

The above is just me in my highly confused state trying to figure out how to zero in on the appropriate configuration.

Obviously I'd love to hear from everyone with substantial 10Ge experience, since you already understand many or most of the considerations that I outlined above in great detail and with great confidence.

However, the FPGA considerations are also potentially huge! The cost of different FPGAs is astronomical, so quite possibly the best design is whatever allows me to adopt a vastly cheaper FPGA. I don't need a huge quantity of logic, don't need multipliers, lots of internal RAM would be great, but I can do without (with external static RAM).


avoid bounce with a button

Hey,
How can I write my VHDL code to avoid push-button bounce?
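One common approach is a counter-based debouncer: synchronize the raw input, then accept a new level only after it has been stable for a few milliseconds. A sketch under assumed names (clk at 50 MHz, btn_raw as the raw pin; all signal and constant names are illustrative):

```vhdl
-- Debounce sketch: accept a new button level only after it has been
-- stable for DEBOUNCE_MAX + 1 clock cycles (~10 ms at 50 MHz).
constant DEBOUNCE_MAX : integer := 499999;
signal sync_0, sync_1 : std_logic;        -- 2-FF synchronizer
signal btn_stable     : std_logic := '0'; -- clean, debounced output
signal count          : integer range 0 to DEBOUNCE_MAX := 0;
...
process (clk)
begin
    if rising_edge(clk) then
        sync_0 <= btn_raw;                -- synchronize the async input
        sync_1 <= sync_0;
        if sync_1 = btn_stable then
            count <= 0;                   -- level unchanged: reset the timer
        elsif count = DEBOUNCE_MAX then
            btn_stable <= sync_1;         -- stable long enough: accept it
            count <= 0;
        else
            count <= count + 1;           -- still waiting out the bounce
        end if;
    end if;
end process;
```

Downstream logic then reacts to btn_stable (or to a one-cycle edge pulse derived from it) instead of the raw pin.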

Registers

Hi,

I implemented an algorithm using DSP Builder; it has some iterations and accumulators, and it needs to wait some time before another part of the algorithm continues processing.

I need a variable or register to store this value. I tried to use a DRAM block, but then I have to handle the read/write timing.

Is there a simpler block I can use, or another solution for this?

Thanks.
Oswaldo Fratini Filho

W864 MSVS 2012E ERROR: Unable to find Altera OpenCL platform.

Hello, in a previous thread I had trouble compiling the OpenCL hello world. I have since compiled it correctly, but I was unable to edit the thread since it somehow deleted itself (I must have deleted it by accident? Sorry...). If a mod is reading this and could restore it, I could give a step-by-step of what I did to fix it, as a lot of people might have the same problems as me.

I have a new problem that might stem from not fully understanding how to operate my device, a DE0-Nano-SoC Kit.

I get the following error: "ERROR: Unable to find Altera OpenCL platform." inside the init() function of the hello world example.

Quote:

bool init() {
    cl_int status;

    if (!setCwdToExeDir()) {
        return false;
    }

    // Get the OpenCL platform.
    platform = findPlatform("Altera");
    if (platform == NULL) {
        printf("ERROR: Unable to find Altera OpenCL platform.\n");
        return false;
    }

I have the device powered on with the AC plug in. I also have the USB cable it came with (Type A to Mini-B USB cable x1).

POSSIBLE REASONS:

1: I don't have the "USB-Blaster II" device (I was previously unaware I would need such a thing).
2: Incorrect drivers are installed.

If it is reason 1, then my next question is:

How can I communicate with my device? I want to run OpenCL code from a C++ program.

DE2 board GPIO Pin Output Voltage

Hi,

I am using the DE2 board for a project, with pins from JP1 (GPIO_0) as my output pins. I have three output pins there; two of them have the correct voltage level (0 to 3.3 V), but one pin outputs -1.4 to 2 V. What could be causing this problem, and how can I fix it? Thank you in advance!

mod operation synthesis (Avalon-MM master writing to DRAM problem)

Hi. I'm trying to write a simple Avalon-MM master component which writes an image to DRAM. I use this component in a Qsys system with a Nios II/e processor, an SDRAM controller, and University Program video cores. I'm trying this on a DE2-115 board (Cyclone IV EP4CE115F29C7, 50 MHz clock) connected to a monitor via VGA. The SDRAM memory, its controller, and my component are driven by a 167 MHz clock generated by a PLL. The display part consists of UP video cores: a DMA which reads the 800x600 8-bit grayscale image, and a VGA controller.

My component source is here https://gist.github.com/woky/a9a02ac...21262d0c607255. (It's also below but gist has line numbers). The component is either waiting for arrival of an address on ctl interface or writing an image to the address received on the ctl interface. The image is just black top half and white bottom half. In main() in my Nios program I just allocate memory via malloc() and write its address into the UP video DMA and my component. Please ignore debug_* signals, they're just for debugging purposes (displaying state on 7 seg displays and leds).

I originally used the mod operation on pixel_counter (commented out in the code), but the results were varying and wrong. Sometimes it looked like the image wasn't written at all even though the writing branch was entered (LED on debug_out(1)). Sometimes main() froze on something. Sometimes it wrote just 256, 512, or 4096 pixels (observed only via pixel_counter on the 7-segment displays, not on screen). It's enough to uncomment line 66 and comment line 67 of the gist to unleash the madness.

What could be the reason for this strange and unpredictable behaviour?

Thank you.

Code:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;


entity frame_writer is
    port (
        clk            : in  std_logic                    := '0';            --  clk.clk
        reset          : in  std_logic                    := '0';            -- reset.reset
        ctl_write      : in  std_logic                    := '0';            --  ctl.write
        ctl_writedata  : in  std_logic_vector(31 downto 0) := (others => '0'); --      .writedata
        wr_address    : out std_logic_vector(31 downto 0);                    --    wr.address
        wr_burstcount  : out std_logic_vector(10 downto 0);                    --      .burstcount
        wr_waitrequest : in  std_logic                    := '0';            --      .waitrequest
        wr_writedata  : out std_logic_vector(31 downto 0);                    --      .writedata
        wr_write      : out std_logic;                                        --      .write
        debug_out      : out std_logic_vector(127 downto 0);                    -- debug.debug_out
        debug_in      : in  std_logic_vector(127 downto 0) := (others => '0')  --      .debug_in
    );
end entity frame_writer;


architecture rtl of frame_writer is
    constant FRAME_SIZE: natural := 800 * 600;


    signal pixel_counter: natural;
    signal start_write: std_logic;
    signal writeaddr: std_logic_vector(31 downto 0);
begin
    wr_burstcount <= "00000000001";


    debug_out(38 downto 20) <= std_logic_vector(to_unsigned(pixel_counter, 19));


    process (clk, reset)
    begin
        if reset = '1' then
            start_write <= '0';
            pixel_counter <= 0;


            debug_out(1 downto 0) <= (others => '0');
        elsif rising_edge(clk) then
            --if start_write = '0' and pixel_counter = 0 then
            if start_write = '0' and (pixel_counter = 0 or pixel_counter >= FRAME_SIZE) then
                wr_write <= '0';
                pixel_counter <= 0;
                wr_address <= (others => '0');
                wr_writedata <= (others => '0');


                if ctl_write = '1' then
                    start_write <= '1';
                    writeaddr <= ctl_writedata;
                end if;


                debug_out(0) <= '0';
            else
                wr_write <= '1';
                wr_address <= std_logic_vector(unsigned(writeaddr) +
                        to_unsigned(pixel_counter, wr_address'length));


                if pixel_counter < FRAME_SIZE/2 then
                    wr_writedata <= x"00000000";
                else
                    wr_writedata <= x"ffffffff";
                end if;


                if wr_waitrequest = '0' then
                    start_write <= '0';
                    --pixel_counter <= (pixel_counter + 4) mod FRAME_SIZE;
                    pixel_counter <= pixel_counter + 4;
                end if;


                debug_out(0) <= '1';
                debug_out(1) <= '1';
            end if;
        end if;
    end process;


end architecture rtl;
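A note on the mod in the code above: FRAME_SIZE = 480000 is not a power of two, so `(pixel_counter + 4) mod FRAME_SIZE` makes the synthesizer infer a full remainder circuit, a long combinational path that is unlikely to meet timing at 167 MHz, while the conditional-reset form needs only an adder and a comparator. The two update rules produce the same counter sequence, as this quick Python check (our own sketch, with illustrative function names) shows:

```python
FRAME_SIZE = 800 * 600  # 480000 -- not a power of two

def next_mod(c):
    """The commented-out update: modulo infers a remainder circuit."""
    return (c + 4) % FRAME_SIZE

def next_wrap(c):
    """Compare-and-wrap: only an adder and a comparator in hardware."""
    n = c + 4
    return n - FRAME_SIZE if n >= FRAME_SIZE else n

# Identical for every counter value the component can reach (steps of 4)
assert all(next_mod(c) == next_wrap(c) for c in range(0, FRAME_SIZE, 4))
print("sequences match")
```

So keeping the plain increment plus the `pixel_counter >= FRAME_SIZE` check loses nothing functionally, and avoids the timing hazard that the mod introduces.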

DE2I150 - Nios compilation error

Hello,

I am a newbie in using Nios. I am trying to store an image from an SD card into SDRAM. My C code has this call: IOWR_16DIRECT(SDRAM_BASE,(((i+80)*640+(j))-40),pixel). When I compile, I get the error "implicit declaration of function IOWR_16DIRECT". I am using the system.h and altera_up_sd_card_avalon_interface.h header files as well.

Please let me know how I can work around this.

Thanks

LVDS timing issue in cyclone v device

Hello,

I have some problems implementing two copies of the same interface in a Cyclone V device.

- Each interface is edge-aligned, and the LVDS data and LVDS clock are mapped to the same bank.
- The clock runs at 200 MHz and the data is sampled as DDR.
- For the implementation I used the ALTLVDS_RX IP core. The documentation ("Cyclone V Device Handbook Volume 1, Receiver Blocks in Cyclone V Devices") mentions that with a DDR interface and a serialization factor of 2, the IP core bypasses any deserializer functionality.
- That means in my case it will automatically include the ALTDDIO core, and therefore I need a PLL (integer, clock-synchronous) with a 90° phase shift.
- Each interface has its own clock domain; there is no clock-domain crossing, and the data is written into MLAB blocks.

My problem is that, with identical constraints on the inputs, I get a setup slack of -2.0 ns at the inputs of the ALTDDIOs. The other interface has no timing issues.
Is there any mistake in my implementation and, if not, are there any options?

I really appreciate any help

