With the new threading architecture of the UE, dlsim
(and others) does not work properly anymore.
When looking at the scope, you see a difference
in the PDSCH LLR display: the end is always 0, whereas
in the current develop branch (tag 2017.w25) it's not.
This commit attempts to fix it.
We still don't have the same behavior as in 2017.w25.
I disabled channel simulation (so that UE RX = eNB TX)
and I have one error where in 2017.w25 I have zero.
For example, here is the output of a run of "./dlsim":
**********************SNR = 0.000000 dB (tx_lev 51.000000)**************************
Errors (1(0)/1000 0/1 0/0 0/0), Pe = (1.000000e-03,0.000000e+00,-nan,-nan), dci_errors 0/1001, Pe = 0.000000e+00 => effective rate 99.900100, normalized delay 0.001472 (1.001000)
And in 2017.w25 we have (with the same hack to disable
channel simulation):
**********************SNR = 0.000000 dB (tx_lev 51.000000)**************************
Errors (0(0)/1000 0/0 0/0 0/0), Pe = (0.000000e+00,-nan,-nan,-nan), dci_errors 0/1000, Pe = 0.000000e+00 => effective rate 100.000000, normalized delay 0.001471 (1.000000)
There may be a problem somewhere. Or there was one before, we
should always have had one error, and the new UE architecture
fixed things so that the current behavior is the correct one.
Hard to say at this point...
When looking at the scope we quickly see some zeros for the PDSCH
LLRs, at the beginning this time, not at the end. This happens only
when the GUI appears, and then all is fine, so it seems to affect
the first frame only. In 2017.w25 this does not happen.

With the current implementation of oaisim
(rxdata and channel simulation), we cannot
call trx_read_func on a dummy buffer. The
code will actually modify the rxdata buffers
of the UE.
This has to be rewritten properly. In the
meantime, let's introduce a simple hack. The
idea of the read at this point is to wait for
the synch to finish and not lose samples from
the hardware in the real UE. In the simulator,
as it is today, we can simply sleep until the
synch code has finished its work.

This bug happens when we detect uplink failure for one UE.
In this case, a DCI format 1A is sent to the UE to ask it
to do random access.
The way this DCI is generated was not compatible with how
the software is organized. It was expected that the DCIs are
added (with add_ue_spec_dci and add_common_dci) in a very
specific order: first all DCIs in the common space are added
(with add_common_dci), then all DCIs in the UE-specific space
are added (with add_ue_spec_dci).
The problem was that the DCI format 1A sent to the UE
for it to do random access is added (with add_ue_spec_dci)
before the DCIs in the common space.
That totally messed up the logic in add_common_dci and
add_ue_spec_dci.
The solution is to get rid of Num_common_dci and Num_ue_spec_dci,
replace those two counters by only one (Num_dci) and add
"search_space" in the dci_alloc structure to be used later by
the function "allocate_CCEs" when calling "get_nCCE_offset".
The software had to be adapted to the new variables, everywhere.
I am not sure that the simulators work. It seems that some
of them didn't use Num_common_dci and Num_ue_spec_dci to
decide in which space (common or UE-specific) to put the DCI,
but relied on the RNTI (comparing it with SI_RNTI). To be tested
properly.
The modified simulators are:
- openair1/SIMULATION/LTE_PHY/dlsim.c
- openair1/SIMULATION/LTE_PHY/dlsim_tm4.c
- openair1/SIMULATION/LTE_PHY/dlsim_tm7.c
- openair1/SIMULATION/LTE_PHY/framegen.c
- openair1/SIMULATION/LTE_PHY/pdcchsim.c
- openair1/SIMULATION/LTE_PHY/syncsim.c

changes:
- ue mcc/mnc 208.93
- use correct key/opc for user
- change addresses in the conf file to (maybe) make them
easier to understand
With those changes, running:
sudo ./run_enb_ue_virt_s1
in cmake_targets/tools should work out of the box
The user still has to configure correct IP addresses in
targets/PROJECTS/GENERIC-LTE-EPC/CONF/enb.band7.generic.oaisim.local_mme.conf
We assume the oaisim (eNB+UE) machine to be at IP address 10.0.1.1
and the EPC (HSS, MME, SPGW) machine at IP address 10.0.1.2.
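With that addressing scheme, the S1-related part of the conf file would be edited roughly along these lines. This is a sketch only: the key names and exact syntax vary between OAI versions and must be checked against the actual enb.band7.generic.oaisim.local_mme.conf file.

```
# Illustration only - verify key names against the real conf file.
mme_ip_address = ( { ipv4       = "10.0.1.2";
                     ipv6       = "none";
                     active     = "yes";
                     preference = "ipv4";
                   }
                 );

NETWORK_INTERFACES :
{
    ENB_INTERFACE_NAME_FOR_S1_MME = "eth0";
    ENB_IPV4_ADDRESS_FOR_S1_MME   = "10.0.1.1/24";
    ENB_INTERFACE_NAME_FOR_S1U    = "eth0";
    ENB_IPV4_ADDRESS_FOR_S1U      = "10.0.1.1/24";
    ENB_PORT_FOR_S1U              = 2152;
};
```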

Several problems are present.
The first is that _write returns 0 instead of the
number of samples. We solve it by returning
nsamps.
The second is that _read may return fewer samples at
the beginning and we don't want to exit because of that.
We solve it also by returning nsamps.
(We still need to log more in this, to be done in the
next commit.)
The third is that after initialization we don't send
anything for a while, while the softmodem finishes
its own init. This generates lots of "RX overrun" errors.
We solve it by disabling the TX and RX modules after init;
then, in trx_brf_start, we activate them again (and
also call bladerf_sync_config, which seems to be
mandatory, and bladerf_calibrate_dc, which could perhaps
be avoided).
Maybe not the end of the story. Sometimes it works: the UE connects
and traffic is fine (tested only with 5MHz). Sometimes it does not:
the UE fails to connect, or it connects but then traffic is bad,
especially uplink.
To be refined.