As a result of continuous cost reduction and device miniaturization, small UAVs have become readily accessible to the public, and numerous new applications in the civilian and commercial domains have emerged. However, despite regulations, non-cooperative UAVs have begun to abuse low-altitude airspace, raising security and safety concerns. In this work, we present a new GNSS-spoofing-based counter-UAV defense system that can flexibly and remotely control a non-cooperative UAV, in a friendly (non-destructive) manner, to fly to a location we specify for capture. Our simulation and field study show the effectiveness of this defense technique.

Due to the higher wireless transmission rates of 5G cellular networks, smartphones incur higher computation overhead, which can cause wireless transmission rates to be limited by the computation capability of wireless terminals. In this case, is there a maximum receiving rate at which smartphones can maintain stable wireless communications in 5G cellular networks? The main objective of this article is to investigate the maximum receiving rate of smartphones and its influence on 5G cellular networks. Based on Landauer’s principle and the safe temperature bound on the smartphone surface, a maximum receiving rate of the smartphone is derived for 5G cellular networks. Moreover, the impact of the maximum receiving rate of smartphones on link adaptive transmission schemes is investigated. Numerical analyses imply that the maximum receiving rate of smartphones cannot always keep up with the downlink rates of future 5G cellular networks. Therefore, link adaptive transmission schemes for future 5G cellular networks have to take the maximum receiving rate of smartphones into account.
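As a back-of-the-envelope illustration of the idea (not the article's derivation), the sketch below assumes a safe heat-dissipation budget for the handset and an overhead factor by which practical receivers exceed the Landauer minimum of kT·ln 2 joules per bit; both numbers are hypothetical.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_rate(power_budget_w, surface_temp_k, energy_overhead):
    """Illustrative maximum receiving rate under a thermal budget.

    power_budget_w: heat the handset can safely dissipate (W, hypothetical)
    surface_temp_k: safe surface-temperature bound (K)
    energy_overhead: factor by which practical processing exceeds the
                     Landauer minimum k*T*ln(2) joules per bit (hypothetical)
    Returns the sustainable rate in bits per second.
    """
    e_min = K_B * surface_temp_k * math.log(2)  # Landauer minimum, J/bit
    e_bit = energy_overhead * e_min             # assumed energy per received bit
    return power_budget_w / e_bit

# Example: 2 W budget at a 318 K (45 degrees C) surface bound
rate = landauer_limit_rate(2.0, 318.0, 1e9)
print(f"max receiving rate ~ {rate / 1e9:.1f} Gb/s")
```

Doubling the overhead factor halves the sustainable rate, which captures the article's point that computation energy, not the radio link, can become the binding constraint.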

A constant energy supply for sensor nodes is essential for the development of the green Internet of Things (IoT). Recently, wireless rechargeable sensor networks (WRSNs) have been proposed to resolve the energy limitations of nodes, aiming to realize continuous operation. In this article, a coverage-aware hierarchical charging algorithm for WRSNs is proposed, considering energy consumption and the degree of node coverage. The algorithm first performs network clustering using the K-means algorithm. Nodes are then classified into multiple levels in each cluster, and respective anchor points are calculated based on the energy consumption rate and coverage degree of the nodes. The anchor points in each cluster are then merged into one optimized anchor point. To reduce charging latency, the optimized anchor points form two disjoint polygons, one internal and one external, and mobile chargers travel along the internal and external polygons, respectively. Experimental results indicate that the proposed algorithm can substantially improve charging efficiency and reduce charging latency.
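The clustering and anchor-point steps can be sketched as follows. This is a minimal illustration assuming 2-D node coordinates and per-node weights standing in for the combined energy-consumption/coverage score, not the article's exact algorithm.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means over 2-D node coordinates; returns centers and clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster emptied out
                new_centers.append((sum(x for x, _ in cl) / len(cl),
                                    sum(y for _, y in cl) / len(cl)))
            else:
                new_centers.append(centers[i])
        centers = new_centers
    return centers, clusters

def anchor_point(cluster, weights):
    """Weighted centroid of a cluster; weights stand in for per-node scores
    combining energy consumption rate and coverage degree."""
    total = sum(weights)
    return (sum(w * x for (x, _), w in zip(cluster, weights)) / total,
            sum(w * y for (_, y), w in zip(cluster, weights)) / total)

# Two well-separated groups of three nodes each
centers, clusters = kmeans([(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)], k=2)
```

The mobile chargers would then visit the per-cluster anchor points along the two polygon tours described above.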

Distinguishing outliers from normal data in wireless sensor networks is a long-standing challenge in the anomaly detection domain, mostly due to the varied nature of the anomalies: software or hardware failures, reading errors, and malicious attacks, among others. In this article, we introduce an optimum-path forest (OPF) classifier for anomaly detection in wireless sensor networks. The results are compared against one-class support vector machines and the multivariate Gaussian distribution. Additionally, we propose employing meta-heuristic optimization techniques to fine-tune the OPF classifier in this context.

Virtualization and emulation have become worthy approaches to saving significant amounts of money on physical resource acquisition. In light of this, the development of network emulation platforms has led a revolution in the research and testing of novel services and applications, as they provide a cost-effective, flexible, and reproducible environment for experimentation. However, they present some practical issues; in particular, scalability is one of the limiting factors, as it constrains the size of the emulated networks that can be successfully deployed on given hardware. We address this matter by measuring the consumption and exploitation of physical resources of one popular network emulation platform, Mininet. We follow a methodology based on the isolation of the threads associated with the operating system, the virtual hosts, and the monitoring tasks. In such a manner, this approach can measure the effect of the placement of threads on the available cores, and help optimize bottlenecks that jeopardize the results of network emulations. Additionally, we monitor several key performance indicators for general-purpose Mininet deployments in different network topologies, varying the number of active elements and links as well as network conditions such as packet loss or delay. Our results show that Mininet presents performance bounds in commodity servers that suffice for a wide range of general network tests. It achieves aggregated bandwidths above 10 Gb/s and median round-trip time values around 1 ms, even in demanding scenarios where more than a thousand hosts, up to 64-hop paths, and 64 subnets are included in the emulated topologies.

The envisioned 5G ecosystem will be composed of heterogeneous networks based on different technologies and communication means, including satellite communication networks. The latter can help increase the capabilities of terrestrial networks, especially in terms of coverage, reliability, and availability, contributing to the achievement of some of the 5G KPIs. However, technological changes are not immediate. Many current satellite communication networks are based on proprietary hardware, which hinders integration with future 5G terrestrial networks as well as the adoption of new protocols and algorithms. On the other hand, the two main paradigms emerging in the networking scenario -- software defined networking (SDN) and network functions virtualization -- can change this perspective. In this respect, this article first presents an overview of the main research works in the field of SDN satellite networks in order to survey the solutions already proposed. Then some open challenges are described in light of the network slicing concept introduced by 5G virtualization, along with a possible roadmap including different network virtualization levels. The remaining unsolved problems relate to the development and deployment of a complete integration of satellite components into the 5G ecosystem.

5G systems have started field trials, and deployment plans are being formulated, following completion of comprehensive standardization efforts and the introduction of multiple technological innovations for improving data rates and latency. Similar to earlier terrestrial wireless technologies, build-out of 5G systems will occur initially in higher population density areas offering the best business cases while not fully addressing airborne and marine applications. Satellite communications will thus continue to be indispensable as part of an integrated 5G/satellite architecture to achieve truly universal coverage. Such a unified architecture across terrestrial and satellite wireless technologies can ensure global service, support innovative 5G use cases, and reduce both capital investments and operational costs through efficiencies in network infrastructure deployment and spectrum utilization. This article presents an architectural framework based on a layered approach comprising network, data link, and physical layers together with a multimode user terminal. The network layer uses off-the-shelf building blocks based on 4G and 5G industry standards. The data link layer benefits from dynamic sharing of resources across multiple systems, enabled by intersystem knowledge of estimated and actual traffic demands, RF situational awareness, and resource availability. Communication resource sharing has traditionally comprised time, frequency, and power dimensions. Sharing can be enhanced by leveraging dynamic knowledge of communication platform location, trajectory, and antenna directivity. Logically centralized resource management provides a scalable approach for better utilization of spectrum, especially in higher bands that have traditionally been used by satellites and now are also being proposed for 5G systems. Resource sharing maximizes the utility of a multimode terminal that can access satellite or terrestrial RF links based on specific use cases, traffic demand, and QoS requirements.

Future 5G mobile communication systems are expected to integrate different radio access technologies, including the satellite component. Within the 5G framework, terrestrial services can be augmented with the development of high throughput satellite (HTS) systems and new mega-constellations meeting 5G requirements, such as high bandwidth, low latency, and increased coverage including rural areas, air, and seas. This article provides an overview of current 5G initiatives and projects, followed by a proposed architecture for 5G satellite networks in which the SDN/NFV approach facilitates integration with the 5G terrestrial system. In addition, a novel technique based on network coding is analyzed for the joint exploitation of multiple paths in such an integrated satellite-terrestrial system. For TCP-based applications, an analytical model is presented to achieve an optimal traffic split between terrestrial and satellite paths and optimal redundancy levels.
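A toy stand-in for the multipath split (the article's analytical TCP model is not reproduced here) might divide traffic in proportion to per-path capacity and budget for coded redundancy; the `redundancy` parameter is an assumed knob, not the article's optimized value.

```python
def split_traffic(cap_terrestrial, cap_satellite, redundancy=0.1):
    """Split traffic across the two paths in proportion to capacity.

    redundancy is the fraction of extra network-coded packets sent to
    mask losses (an assumption, not the article's optimized level).
    Returns (terrestrial share, satellite share, effective goodput).
    """
    total = cap_terrestrial + cap_satellite
    share_t = cap_terrestrial / total
    share_s = cap_satellite / total
    # coded redundancy consumes part of the raw aggregate capacity
    return share_t, share_s, total / (1.0 + redundancy)
```

For instance, an 80 Mb/s terrestrial path and a 20 Mb/s satellite path with 25 percent redundancy yield an 80/20 split and 80 Mb/s of effective goodput.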

The space-ground integrated network appears to be a promising solution for providing global coverage and broadband communications. Over the past several years, many pioneering research works on space-ground integrated networks have emerged. However, most existing works have neglected the impact of satellite gateway placement on network reliability, and in particular do not consider the practical capacity constraint on satellite links. In light of this, we study the gateway placement problem with capacity constraints in the space-ground integrated network, and propose an enumeration scheme and a heuristic greedy scheme as solutions. Extensive numerical results validate the efficacy of the proposed strategies.
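A heuristic greedy scheme in this spirit might look like the following sketch, which repeatedly opens the candidate gateway serving the most still-unserved demand under a per-gateway capacity cap. The data model (reachability sets, a single capacity value) is an illustrative assumption, not the article's formulation.

```python
def greedy_gateway_placement(candidates, demands, capacity, budget):
    """Greedy heuristic: repeatedly open the gateway serving the most
    still-unserved demand, honoring a per-gateway capacity cap.

    candidates: {gateway: set of demand ids it can reach}
    demands:    {demand id: traffic volume}
    capacity:   maximum total traffic one gateway's satellite links carry
    budget:     number of gateways to place
    """
    unserved = dict(demands)
    chosen = []
    for _ in range(budget):
        best, best_load, best_served = None, 0.0, None
        for g, reach in candidates.items():
            if g in chosen:
                continue
            load, served = 0.0, []
            # fill the gateway with reachable unserved demands, largest first
            for d in sorted(reach & unserved.keys(), key=lambda d: -unserved[d]):
                if load + unserved[d] <= capacity:
                    load += unserved[d]
                    served.append(d)
            if load > best_load:
                best, best_load, best_served = g, load, served
        if best is None:
            break  # no remaining gateway serves any demand
        chosen.append(best)
        for d in best_served:
            del unserved[d]
    return chosen, unserved
```

The enumeration scheme mentioned in the abstract would instead try all gateway subsets of the given budget, which is exact but exponential in the budget.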

The demands of 5G mobile communications, such as higher throughput and lower latency in future communications, have promoted more advanced communication technologies and heterogeneous network integration. Satellite communication has recently been considered a key part of 5G for its benefits in meeting the availability and ubiquitous coverage requirements targeted by 5G. As a potential 5G technology, multi-satellite cooperative transmission systems have been studied, as they can provide the high throughput brought by virtual multiple-input multiple-output structures. In this article, we briefly review the concepts and techniques of multi-satellite cooperative transmission systems in 5G. Moreover, the architectures of two multi-satellite relay transmission systems, based on TDMA and NOMA, are introduced. In particular, we focus on the performance evaluation and research challenges of the TDMA-based architecture to explore its system optimization approaches. Finally, to exploit the full potential of multi-satellite systems in 5G networks, future trends and challenges are discussed.

5G traffic expectations require not only the appropriate access infrastructure, but also the corresponding backhaul infrastructure to ensure well-balanced network scaling. Optical fiber and terrestrial wireless backhaul will hardly achieve 100 percent coverage, so satellite must be considered within the 5G infrastructure to boost ubiquitous and reliable network utilization. This work presents the main outcomes of the SANSA project, which proposes a novel solution that overcomes the limitations of traditional fixed backhaul. It is based on a dynamic integrated satellite-terrestrial backhaul network operating in the mmWave band. Its key principles are seamless integration of the satellite segment into terrestrial backhaul networks; a terrestrial wireless network capable of reconfiguring its topology according to traffic demands; and aggressive frequency reuse within the terrestrial segment and between the terrestrial and satellite segments. The two technological enablers of SANSA are smart antenna techniques at mmWave and software defined intelligent hybrid network management. This article introduces these 5G enablers, which allow satellite communications to play a key role in different 5G use cases, from the early deployment of 5G services in sparse scenarios to enhanced mobile broadband in denser scenarios.

The fifth generation of mobile radio communication systems, dubbed 5G, faces the challenge of coping with tremendous increases in data traffic volume and peak data rates, reduced latencies, improved energy-efficient transmissions, and new use cases. In fact, besides traditional MBB services, future 5G networks will have the opportunity to connect billions of objects, the so-called IoT or mMTC. This article addresses the question of the possible role of satellite systems in mMTC services. Key satellite mMTC system design trade-offs are outlined, jointly with some examples of network sizing.

The rising number of airplanes and UAVs requiring connectivity in the sky puts high demands on all types of networks. In areas without DA2GC coverage, such as seas and oceans, the only option is SC. However, the capacity of SC is limited and insufficient in some cases. Therefore, extending DA2GC with A2AC and integrating A2AC and SC is a promising solution to improve the available capacity. The main aim of this article is to evaluate the limits of SC and A2AC in terms of maximum available capacity and maximum range. The results show that A2AC with DA2GC backhauling can exceed the capacity available via SC under certain conditions while flying close to the mainland. Integration of A2AC and SC can significantly improve the data rate available to UAVs and airplanes, especially where the sky cannot be covered by DA2GC alone. Our results show that A2AC can provide capacity of up to 93 Mb/s, and it can even exceed the capacity of SC when DA2GC and A2AC distances are short. When SC is overloaded, A2AC can be used instead to provide airplanes with a capacity of 37 Mb/s at up to 432 km DA2GC distance and 340 km A2AC distance. Besides the evaluation of both networks, the article also summarizes and discusses potential challenges and open issues of integration that need to be considered on the way to successful cooperation of both networks.
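To see why available capacity falls with DA2GC/A2AC distance, a toy link budget (free-space path loss plus the Shannon limit) suffices. All parameter values below are hypothetical and are not the article's evaluation setup.

```python
import math

C = 3e8  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB."""
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

def link_capacity_mbps(distance_m, freq_hz, tx_dbm, gains_db, bandwidth_hz, noise_dbm):
    """Shannon-limit capacity of a line-of-sight link at the received SNR."""
    rx_dbm = tx_dbm + gains_db - fspl_db(distance_m, freq_hz)
    snr = 10 ** ((rx_dbm - noise_dbm) / 10)
    return bandwidth_hz * math.log2(1 + snr) / 1e6

# Hypothetical A2AC link: 5 GHz carrier, 20 MHz channel, 30 dBm Tx, 20 dB gains
near = link_capacity_mbps(50e3, 5e9, 30, 20, 20e6, -90)
far = link_capacity_mbps(400e3, 5e9, 30, 20, 20e6, -90)
```

With these assumed parameters the 50 km link supports roughly an order of magnitude more capacity than the 400 km link, mirroring the distance-dependent crossover between A2AC and SC described above.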

This article addresses the issue of joint aerial-terrestrial resource management in mobile radio networks supported by a UAV operating as a network node, and discusses the potential of true integration between the terrestrial and UAV components of the network. A simulation campaign shows that, by properly optimizing the system parameters related to the UAV flight, a single UAV can bring significant improvement in network throughput over a wide service area. A joint radio resource management approach, where the UAV and terrestrial base stations operate in a coordinated manner, brings significant advantages over alternative algorithms.

The effort of telecommunications operators in 5G design and implementation is oriented toward effective and efficient support of verticals. Full network softwarization will rely on tools such as software-defined networking and network functions virtualization to allow dynamic service provisioning on the same physical infrastructure. However, the currently proposed 5G architecture still requires further enhancement to guarantee fully real-time, on-demand services. Current virtualization technologies still lack complete automation and adaptation, and they do not have dynamic negotiation capabilities. These issues are holding back the desired complete automation and the significant reduction in OPEX and CAPEX. Thus, this article proposes a full architecture to realize autonomic mobile virtual network operators (AMVNOs). These autonomic virtual operators can be deployed by Internet service providers to guarantee efficient and effective network adaptation to unexpected events and real-time resource requests. The objective of an AMVNO is to exploit network softwarization and experiential intelligence to play the role of a local operator, performing network management and service release without human intervention. Its autonomic character can also provide proactive actions against unwanted network states.

UAV-aided communication technology holds tremendous potential to upgrade outdoor link throughput and provide on-demand wireless services. Their flexible deployment makes UAV-aided networks well suited to emergency situations, including natural disasters and sudden traffic hotspots. Against this backdrop, UAVs must be densely deployed to accommodate the huge volume of data traffic, where interference among neighboring cells becomes extremely challenging. To break this stalemate, this article systematically investigates spectrum sharing technology for ultra-dense UAV-aided networks, from the architecture level down to the physical layer. We shed light on design principles and key challenges in utilizing overlapped spectrum for interference-enabled concurrent transmissions. With these principles in mind, we introduce SpecShare, which utilizes coding redundancy at the PHY layer for UAV spectrum sharing. We explore the optimal UAV placement strategy at the network layer to fully unleash the potential of such spectrum sharing. We discuss the feasibility of SpecShare and demonstrate its effectiveness in terms of network throughput.

In various scenarios, achieving security between IoT devices is challenging since the devices may have different dedicated communication standards and resource constraints as well as various applications. In this article, we first provide requirements and existing solutions for IoT security. We then introduce a new reconfigurable security framework based on edge computing, which utilizes a near-user edge device, that is, a security agent, to support security functions as IoT resources for the security requirements of all protocol layers including multiple applications on an IoT device. This framework is designed to overcome the challenges including high computation costs, low flexibility in key management, and low compatibility in deploying new security algorithms in IoT, especially when adopting advanced cryptographic primitives. We also provide the design principles of the reconfigurable security framework, the exemplary security protocols for anonymous authentication and secure data access control, and the performance analysis in terms of feasibility and usability. The reconfigurable security framework paves a new way to strengthen IoT security by edge computing.

As a paradigm of Internet of Things (IoT), social sensor cloud (SSC), which connects the social networks and the sensor networks as well as the cloud, is receiving growing attention from both the academic and industrial communities. Aiming to be a timely guidance with respect to SSC, this article investigates SSC from four perspectives: framework, greenness, issues, and outlook. Specifically, a framework for SSC is introduced first. Then, with the initiative to reduce the carbon footprint, the mechanisms that enable green SSC are discussed. With that, the open research issues for SSC are presented. Finally, an outlook for SSC is shown.

Cloud computing is now a popular computing paradigm that provides end users access to configurable resources on any device, from anywhere, at any time. Over the past years, cloud computing has developed dramatically. However, with the development of the Internet of Things, the disadvantages of cloud computing (such as high latency) are gradually being revealed due to the long distance between the cloud and end users. Fog computing has been proposed to solve this problem by extending the cloud to the edge of the network. In particular, fog computing introduces an intermediate layer, called fog, designed to process the communication data between the cloud and end users. Hence, fog computing is usually considered an extension of cloud computing. In this article, we discuss the design issues for data security and privacy in fog computing. Specifically, we present the unique data security and privacy design challenges presented by the fog layer and highlight the reasons why data protection techniques in cloud computing cannot be directly applied in fog computing.

Edge computing has great potential to address the challenges in mobile vehicular networks by transferring partial storage and computing functions to network edges. However, it remains a challenge to efficiently utilize heterogeneous edge computing architectures and deploy large-scale Internet of Vehicles (IoV) systems. In this article, we focus on the collaborations among different edge computing anchors and propose a novel collaborative vehicular edge computing framework, called CVEC. Specifically, CVEC can support more scalable vehicular services and applications through both horizontal and vertical collaborations. Furthermore, we discuss the architecture, principles, mechanisms, special cases, and potential technical enablers of the CVEC. Finally, we present some research challenges as well as future research directions.

In recent years, UAVs have received much attention in both the military and civilian fields for monitoring, emergency relief, and search tasks. Equipped with sensors, UAVs are considered a new technology for obtaining data at high altitudes. This technology is vital to the success of next-generation monitoring systems, which are expected to be reliable, real-time, efficient, and secure. However, due to bandwidth limitations in UAV-aided networks, the size of the transmitted data is a crucial factor for real-time media data transmission, especially for national defense. To address this issue, in this article we propose a real-time end-to-end media data transmission mechanism based on an unsupervised deep neural network. The proposed mechanism transforms the media data captured by UAVs into latent codes of a predefined constant size and transmits the codes to the ground control station (GCS) for reconstruction. We use a real-world dataset containing millions of samples to evaluate the proposed mechanism, which achieves a high transmission ratio, low resource usage, and good visual quality.

Network Function Virtualization (NFV) allows datacenters to consolidate network appliance functions onto commodity servers and devices. Currently, telecommunication carriers are re-architecting their central offices as NFV datacenters that, along with SDN, help network service providers speed deployment and reduce cost. However, it is still unclear how a carrier network should organize its NFV datacenter resources into a coherent service architecture to support global network functional demands. This work proposes a hierarchical NFV/SDN-integrated architecture in which datacenters are organized into a multi-tree overlay network to collaboratively process user traffic flows. The proposed architecture steers traffic to a nearby datacenter to optimize user-perceived service response time. Our experimental results reveal that the 3-tier architecture is favored over others, as it strikes a good balance between centralized processing and edge computing, and that resource allocation should be decided based on traffic's source-destination attributes. Our results indicate that when most traffic flows within the same edge datacenter, the strategy whereby resources are concentrated at the carrier's bottom-tier datacenters is preferred; but when most traffic flows across a carrier network or across different carrier networks, a uniform distribution over the datacenters or over the tiers, respectively, stands out.
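The steering rule can be sketched as walking up the datacenter tree from the user's edge datacenter until a tier has spare NFV capacity; the data structures below are illustrative assumptions, not the paper's implementation.

```python
def steer_flow(parent, capacities, src_dc, demand):
    """Place a flow at the nearest datacenter up the tree with spare capacity.

    parent:     {dc: parent dc in the multi-tree overlay; None at the root}
    capacities: {dc: remaining NFV processing capacity} (mutated on success)
    Returns the hosting datacenter, or None if the flow is rejected.
    """
    dc = src_dc
    while dc is not None:
        if capacities.get(dc, 0) >= demand:
            capacities[dc] -= demand
            return dc
        dc = parent.get(dc)  # climb one tier toward the root
    return None  # no datacenter on the path can host the flow
```

Under this rule, concentrating capacity at the bottom tier keeps intra-edge flows local, while cross-carrier flows naturally climb toward the upper tiers, matching the trade-off reported above.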

WiFi networks can offload data traffic from congested cellular networks in a cost-effective way. However, it is difficult to perform WiFi offloading in mobile environments due to the complicated and time-consuming access procedure of WiFi networks. In this article, we first investigate mobile traffic offloading by leveraging the Hotspot 2.0 (HS-2.0) technique, which greatly simplifies the access procedure and provides a novel signaling diagram to enable automatic association and seamless roaming for mobile users. We then compare the legacy HS-1.0 with HS-2.0, and study the impact of HS-2.0 on mobile traffic offloading by considering the pedestrian case and the drive-thru Internet case, respectively. We develop an HS-2.0 traffic offloading prototype and provide useful results through empirical measurements. Finally, we outline the research issues for HS-2.0 mobile traffic offloading.

Communication in cellular networks is based on serving cells that provide the basic network service. In the real world, serving cells overlap, which means that one position is usually covered by more than one serving cell. Recently, the instability of mobility management in cellular networks has been studied by monitoring and analyzing the handoff process at mobile devices. However, the handoff process is actually driven by base stations rather than mobile devices. Hence, it is of great importance to measure the handoff process of mobility management from the base station side. In this article, we present a series of experiments performed using data obtained from mobile network operators. The contributions of this study are three-fold. First, we reproduce the handoff process and handoff loop at both the mobile device level and the base station level, and confirm the existence of handoff loops through measurements from the base station side. Second, through large-range measurements, we discover that only a small fraction of serving cells is involved in the handoff process, and in most cases the number of candidate serving cells is much smaller than the number of cells covering a given position; in particular, when a handoff loop occurs, the number of candidate serving cells is quite small, contrary to our initial assumption. We also confirm that handoff loops often occur indoors or when the mobile device communicates frequently with the base station. Finally, we present several comprehensive facts about the handoff process and handoff loop, and provide suggestions that can increase the quality of service of cellular networks.
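A simple way to flag handoff (ping-pong) loops in a serving-cell trace is to look for repeated bounces between the same pair of cells. This detector is an illustrative sketch, not the measurement pipeline used in the study.

```python
def find_handoff_loops(cells, min_repeats=2):
    """Flag ping-pong loops: the device bouncing between the same cell pair.

    cells: serving-cell ids in time order, e.g. from base station logs.
    Returns the set of unordered cell pairs bounced >= min_repeats times.
    """
    loops = set()
    for i in range(len(cells) - 2 * min_repeats + 1):
        a, b = cells[i], cells[i + 1]
        if a == b:
            continue
        # a loop looks like a, b, a, b, ... for min_repeats bounces
        if cells[i:i + 2 * min_repeats] == [a, b] * min_repeats:
            loops.add((min(a, b), max(a, b)))
    return loops
```

Applying such a detector to base-station-side traces is what makes it possible to confirm loops that device-side monitoring alone might miss.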

Multimedia cloud computing has emerged as a very attractive environment for the business world, providing cost-effective services with minimal entry costs and infrastructure requirements. There are some architecture proposals in the related literature, but there is no multimedia cloud computing architecture with hybrid features specifically designed for mobile devices. In this article, we propose a new multimedia hybrid cloud computing architecture and protocol. It merges existing private and public clouds and combines the IaaS, SaaS, and SECaaS cloud computing models in order to find a common platform to deliver real-time traffic from heterogeneous multimedia and social networks to mobile users. The developed protocol provides suitable levels of QoS while providing a secure and trusted cloud environment.

Opportunistic routing and network coding are two promising techniques proposed for wireless networks. They significantly improve the performance of multi-hop wireless networks by exploiting the broadcast nature of the wireless medium and approaching the capacity of lossy wireless networks. Recent research has shown that combining opportunistic routing and network coding in a single joint protocol outperforms each of them individually. This article explains the motivation for and interaction effects of the joint protocols. We provide a taxonomy of joint protocols and illustrate their benefits and costs by highlighting their fundamental components and comparing different solutions. We conclude with an outline of future research directions.
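The network-coding half of such a joint protocol can be illustrated with intra-flow coding over GF(2): any forwarder that overheard some packets broadcasts an XOR combination, and the destination decodes by Gaussian elimination once it has collected enough linearly independent combinations. A minimal sketch of the combining step (not any specific protocol's implementation):

```python
def xor_combine(packets, coeffs):
    """GF(2) linear combination: XOR together the packets with coefficient 1.

    packets: equal-length byte strings; coeffs: 0/1 coefficient vector.
    """
    out = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            out = bytes(x ^ y for x, y in zip(out, p))
    return out
```

Because XOR is its own inverse, a receiver holding the coded packet and one source packet recovers the other by combining them again, which is exactly why overheard packets become useful rather than wasted.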

Small cell base stations (SBSs) in 5G and beyond networks will play multiple roles in addition to radio access, and will be deployed at very high density. They can facilitate mobile edge computing (MEC), such that SBSs can store, process, and control network data signals, becoming important elements in ultra-dense networks (UDNs). However, intercell interference may jeopardize the gains from network densification, and thus coordinated multi-point (CoMP) transmission is important in UDNs. MEC and CoMP are different technologies, but they can work jointly to benefit UDNs. In this article, we aim to bridge the gap between MEC and CoMP based on three MEC functions. We show that CoMP transmission reduces UDN complexity and transmission delay through collaboration among MEC servers in UDNs. In return, MEC-enabled UDNs can alleviate the backhaul pressure of CoMP transmission and make scalable CoMP possible.

Edge computing has evolved into a promising avenue for enhancing system computing capability by offloading processing tasks from the cloud to edge devices. In this article, we propose a multi-layer edge computing framework called EdgeFlow. In this framework, nodes ranging from edge devices to cloud data centers are categorized into corresponding layers and cooperate for data processing. EdgeFlow handles the trade-off between computing and communication capabilities so that tasks can be assigned to each layer optimally. At the same time, resources are carefully allocated throughout the whole network to mitigate performance fluctuation. The proposed open-source data flow processing framework is implemented on a platform that can emulate various computing nodes in multiple layers and the corresponding network connections. Evaluated on a face recognition scenario, EdgeFlow significantly reduces task finish time and is more tolerant of run-time variations compared with pure cloud computing, pure edge computing, and Cloudlet. Potential applications of EdgeFlow, including network function virtualization, the Internet of Things, and vehicular networks, are also discussed at the end of this article.
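The computing/communication trade-off behind such layered assignment can be illustrated with a toy selection rule: place each task at the layer minimizing upload time plus compute time. This is a sketch of the idea, not EdgeFlow's actual scheduler; the layer names and parameters (CPU rates, cumulative uplink rates) are hypothetical.

```python
def assign_task(task_cycles, data_bits, layers):
    """Pick the layer minimizing upload time plus compute time.

    layers: {name: (cpu_hz, uplink_bps)} where uplink_bps is the cumulative
    rate from the device to that layer (0 means local, no transfer).
    """
    def latency(name):
        cpu_hz, uplink_bps = layers[name]
        transfer = data_bits / uplink_bps if uplink_bps else 0.0
        return transfer + task_cycles / cpu_hz
    return min(layers, key=latency)

# Hypothetical three-layer deployment: local device, edge server, cloud
LAYERS = {"device": (1e9, 0), "edge": (1e10, 1e8), "cloud": (1e12, 1e7)}
```

Small tasks stay on the device, compute-heavy ones justify the trip to the cloud, and the middle ground lands on the edge layer, which is the intuition behind layering in the first place.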

In legacy networks, network functions (e.g., firewalls, NAT, QoS) are highly dependent on dedicated hardware. It is sophisticated and costly for network operators to determine the order of network functions and stitch them together to create service chains for end users, especially with the explosive growth of end users and Internet services. The recent emergence of NFV and SDN technologies provides benefits for service function chaining and reduces OPEX and CAPEX for network operators. This article presents a typical framework for constructing service chains by combining SDN and NFV, and describes several critical issues in this field, helping readers grasp current research progress and make new contributions to the field.

The cognitive radio ad hoc network (CRAHN) has been considered an efficient spectrum-aware communication paradigm in wireless networks because of its intrinsic properties of cognition and self-organization. In practice, because primary and secondary ad hoc networks are usually associated with two different types of wireless networks, synchronization between primary and secondary users is hardly guaranteed. Multi-channel asynchronous CRAHNs, referred to as multi-channel non-time-slotted CRAHNs, impose many challenging problems that severely degrade system performance. In this article, we review these challenging issues in multi-channel non-time-slotted CRAHNs, including reactivation failure, frequent unexpected handoffs, non-real-time spectrum aggregation, inefficient power allocation, and frequent re-routing. We then develop a full-duplex-based framework to resolve these issues. Future research directions are discussed to improve the system performance of multi-channel non-time-slotted CRAHNs.

VANETs enable information exchange among vehicles, other end devices and public networks, which plays a key role in road safety/infotainment, intelligent transportation systems, and self-driving systems. As vehicular connectivity soars, and new on-road mobile applications and technologies emerge, VANETs are generating an ever-increasing amount of data, requiring fast and reliable transmissions through VANETs. On the other hand, a variety of VANETs related data can be analyzed and utilized to improve the performance of VANETs. In this article, we first review VANETs technologies to efficiently and reliably transmit big data. Then, the methods employing big data for studying VANETs characteristics and improving VANETs performance are discussed. Furthermore, we present a case study where machine learning schemes are applied to analyze VANETs measurement data for efficiently detecting negative communication conditions.

As an integral part of V2G networks, EVs receive electricity from not only the grid but also other EVs and may frequently feed the power back to the grid. Payment records in V2G networks are useful for extracting user behaviors and facilitating decision-making for optimized power supply, scheduling, pricing, and consumption. Sharing payment and user information, however, raises serious privacy concerns in addition to the existing challenge of secure and reliable transaction processing. In this article, we propose a blockchain-based privacy preserving payment mechanism for V2G networks, which enables data sharing while securing sensitive user information. The mechanism introduces a registration and data maintenance process that is based on a blockchain technique, which ensures the anonymity of user payment data while enabling payment auditing by privileged users. Our design is implemented based on Hyperledger to carefully evaluate its feasibility and effectiveness.
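The registration and chained-record idea can be sketched with hashes alone (Hyperledger and the article's full protocol are not reproduced here): users transact under a pseudonym derived from a registration secret, and each payment record is chained to the previous one so any party can audit integrity without learning real identities. All function names and the record layout below are illustrative assumptions.

```python
import hashlib
import json

def pseudonym(user_id, secret):
    """Registration: derive a pseudonym so payments are not linkable to the id."""
    return hashlib.sha256((secret + user_id).encode()).hexdigest()[:16]

def record_hash(body):
    """Canonical hash of one payment record."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_payment(chain, user_id, secret, amount_kwh):
    """Append a payment under a pseudonym, chained to the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"payer": pseudonym(user_id, secret), "kwh": amount_kwh, "prev": prev}
    chain.append({"body": body, "hash": record_hash(body)})
    return chain

def verify_chain(chain):
    """Audit integrity of the whole chain without learning real identities."""
    prev = "0" * 64
    for block in chain:
        if block["body"]["prev"] != prev or record_hash(block["body"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True
```

Tampering with any recorded amount breaks the hash chain from that block onward, which is what lets privileged auditors trust the payment history while ordinary parties see only pseudonyms.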

Over the past decade, a significant trend in surveillance applications has been to send huge amounts of real-time media data to the cloud via dedicated high-speed fiber networks. However, with the explosion of mobile devices and services in the Internet of Things era, it has become more promising to undertake real-time data processing at the edge of the network in a distributed way. Moreover, to reduce the investment in network deployment, media communication in surveillance applications is gradually becoming wireless. This poses great challenges for detecting objects at the edge in a distributed and communication-efficient way. In this article, we propose an edge computing based object detection architecture to achieve distributed and efficient object detection via wireless communications for real-time surveillance applications. We first introduce the proposed architecture and its potential benefits, and identify the challenges associated with implementing it. Then, a case study is presented to show our preliminary solution, followed by performance evaluation results. Finally, future research directions are pointed out for further study.